00:00:00.001 Started by upstream project "autotest-per-patch" build number 126224 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.111 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.112 The recommended git tool is: git 00:00:00.112 using credential 00000000-0000-0000-0000-000000000002 00:00:00.114 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.166 Fetching changes from the remote Git repository 00:00:00.169 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.209 Using shallow fetch with depth 1 00:00:00.209 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.209 > git --version # timeout=10 00:00:00.241 > git --version # 'git version 2.39.2' 00:00:00.241 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.584 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.597 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.608 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD) 00:00:06.608 > git config core.sparsecheckout # timeout=10 00:00:06.620 > git read-tree -mu HEAD # timeout=10 00:00:06.636 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5 00:00:06.658 Commit message: "jenkins/jjb-config: Purge centos leftovers" 00:00:06.659 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10 00:00:06.789 [Pipeline] Start of Pipeline 00:00:06.805 [Pipeline] library 00:00:06.807 Loading library shm_lib@master 00:00:06.807 Library shm_lib@master is cached. Copying from home. 00:00:06.821 [Pipeline] node 00:00:06.828 Running on VM-host-SM17 in /var/jenkins/workspace/freebsd-vg-autotest 00:00:06.829 [Pipeline] { 00:00:06.839 [Pipeline] catchError 00:00:06.841 [Pipeline] { 00:00:06.850 [Pipeline] wrap 00:00:06.856 [Pipeline] { 00:00:06.862 [Pipeline] stage 00:00:06.864 [Pipeline] { (Prologue) 00:00:06.881 [Pipeline] echo 00:00:06.882 Node: VM-host-SM17 00:00:06.889 [Pipeline] cleanWs 00:00:06.897 [WS-CLEANUP] Deleting project workspace... 00:00:06.897 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.901 [WS-CLEANUP] done 00:00:07.090 [Pipeline] setCustomBuildProperty 00:00:07.153 [Pipeline] httpRequest 00:00:07.175 [Pipeline] echo 00:00:07.176 Sorcerer 10.211.164.101 is alive 00:00:07.183 [Pipeline] httpRequest 00:00:07.186 HttpMethod: GET 00:00:07.187 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.187 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:07.189 Response Code: HTTP/1.1 200 OK 00:00:07.189 Success: Status code 200 is in the accepted range: 200,404 00:00:07.189 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.400 [Pipeline] sh 00:00:08.675 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz 00:00:08.711 [Pipeline] httpRequest 00:00:08.725 [Pipeline] echo 00:00:08.726 Sorcerer 10.211.164.101 is alive 00:00:08.732 [Pipeline] httpRequest 00:00:08.735 HttpMethod: GET 00:00:08.735 URL: http://10.211.164.101/packages/spdk_455fda46502a1fd840706d13900761ca4d1d4bc5.tar.gz 00:00:08.736 Sending request to url: http://10.211.164.101/packages/spdk_455fda46502a1fd840706d13900761ca4d1d4bc5.tar.gz 00:00:08.757 Response Code: HTTP/1.1 200 OK 00:00:08.758 Success: Status code 200 is in the accepted range: 200,404 00:00:08.758 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_455fda46502a1fd840706d13900761ca4d1d4bc5.tar.gz 00:00:47.533 [Pipeline] sh 00:00:47.822 + tar --no-same-owner -xf spdk_455fda46502a1fd840706d13900761ca4d1d4bc5.tar.gz 00:00:51.161 [Pipeline] sh 00:00:51.439 + git -C spdk log --oneline -n5 00:00:51.439 455fda465 nvme_pci: ctrlr_scan_attached callback 00:00:51.439 a732bf2a5 nvme_transport: optional callback to scan attached 00:00:51.439 2728651ee accel: adjust task per ch define name 00:00:51.439 e7cce062d Examples/Perf: correct the calculation of total bandwidth 00:00:51.439 3b4b1d00c libvfio-user: bump MAX_DMA_REGIONS 00:00:51.464 [Pipeline] writeFile 00:00:51.481 [Pipeline] sh 00:00:51.760 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:51.769 [Pipeline] sh 00:00:52.043 + cat autorun-spdk.conf 00:00:52.043 SPDK_TEST_UNITTEST=1 00:00:52.043 SPDK_RUN_VALGRIND=0 00:00:52.043 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.043 SPDK_TEST_NVME=1 00:00:52.043 SPDK_TEST_BLOCKDEV=1 00:00:52.043 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:52.049 RUN_NIGHTLY=0 00:00:52.052 [Pipeline] } 00:00:52.067 [Pipeline] // stage 00:00:52.081 [Pipeline] stage 00:00:52.083 [Pipeline] { (Run VM) 00:00:52.098 [Pipeline] sh 00:00:52.375 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:52.375 + echo 'Start stage prepare_nvme.sh' 00:00:52.375 Start stage prepare_nvme.sh 00:00:52.375 + [[ -n 5 ]] 00:00:52.375 + disk_prefix=ex5 00:00:52.375 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]] 00:00:52.375 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]] 00:00:52.375 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf 00:00:52.375 ++ SPDK_TEST_UNITTEST=1 00:00:52.375 ++ SPDK_RUN_VALGRIND=0 00:00:52.375 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.375 ++ SPDK_TEST_NVME=1 00:00:52.375 ++ SPDK_TEST_BLOCKDEV=1 00:00:52.375 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:52.375 ++ RUN_NIGHTLY=0 00:00:52.375 + cd /var/jenkins/workspace/freebsd-vg-autotest 00:00:52.375 + nvme_files=() 00:00:52.375 + declare -A nvme_files 00:00:52.375 + backend_dir=/var/lib/libvirt/images/backends 00:00:52.375 + 
nvme_files['nvme.img']=5G 00:00:52.375 + nvme_files['nvme-cmb.img']=5G 00:00:52.375 + nvme_files['nvme-multi0.img']=4G 00:00:52.375 + nvme_files['nvme-multi1.img']=4G 00:00:52.375 + nvme_files['nvme-multi2.img']=4G 00:00:52.375 + nvme_files['nvme-openstack.img']=8G 00:00:52.375 + nvme_files['nvme-zns.img']=5G 00:00:52.375 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:52.375 + (( SPDK_TEST_FTL == 1 )) 00:00:52.375 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:52.375 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:52.375 + for nvme in "${!nvme_files[@]}" 00:00:52.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:00:52.375 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:52.375 + for nvme in "${!nvme_files[@]}" 00:00:52.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:00:52.375 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:52.375 + for nvme in "${!nvme_files[@]}" 00:00:52.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:00:52.375 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:52.375 + for nvme in "${!nvme_files[@]}" 00:00:52.375 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:00:52.376 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:52.376 + for nvme in "${!nvme_files[@]}" 00:00:52.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:00:52.376 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:52.376 + for nvme in "${!nvme_files[@]}" 00:00:52.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:00:52.376 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:52.376 + for nvme in "${!nvme_files[@]}" 00:00:52.376 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:00:53.751 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.751 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:00:53.751 + echo 'End stage prepare_nvme.sh' 00:00:53.751 End stage prepare_nvme.sh 00:00:53.763 [Pipeline] sh 00:00:54.048 + DISTRO=freebsd14 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:54.048 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -H -a -v -f freebsd14 00:00:54.048 00:00:54.048 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant 00:00:54.048 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk 00:00:54.048 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest 00:00:54.048 HELP=0 00:00:54.048 DRY_RUN=0 00:00:54.048 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img, 00:00:54.048 NVME_DISKS_TYPE=nvme, 00:00:54.048 NVME_AUTO_CREATE=0 00:00:54.048 NVME_DISKS_NAMESPACES=, 00:00:54.048 
NVME_CMB=, 00:00:54.048 NVME_PMR=, 00:00:54.048 NVME_ZNS=, 00:00:54.048 NVME_MS=, 00:00:54.048 NVME_FDP=, 00:00:54.048 SPDK_VAGRANT_DISTRO=freebsd14 00:00:54.048 SPDK_VAGRANT_VMCPU=10 00:00:54.048 SPDK_VAGRANT_VMRAM=14336 00:00:54.048 SPDK_VAGRANT_PROVIDER=libvirt 00:00:54.048 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:54.048 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:54.048 SPDK_OPENSTACK_NETWORK=0 00:00:54.048 VAGRANT_PACKAGE_BOX=0 00:00:54.048 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:54.048 FORCE_DISTRO=true 00:00:54.048 VAGRANT_BOX_VERSION= 00:00:54.048 EXTRA_VAGRANTFILES= 00:00:54.048 NIC_MODEL=e1000 00:00:54.048 00:00:54.048 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt' 00:00:54.048 /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest 00:00:57.333 Bringing machine 'default' up with 'libvirt' provider... 00:00:58.267 ==> default: Creating image (snapshot of base box volume). 00:00:58.267 ==> default: Creating domain with the following settings... 00:00:58.267 ==> default: -- Name: freebsd14-14.0-RELEASE-1718332871-2294_default_1721063993_0f97593a3d8fce1a7455 00:00:58.267 ==> default: -- Domain type: kvm 00:00:58.267 ==> default: -- Cpus: 10 00:00:58.267 ==> default: -- Feature: acpi 00:00:58.267 ==> default: -- Feature: apic 00:00:58.267 ==> default: -- Feature: pae 00:00:58.267 ==> default: -- Memory: 14336M 00:00:58.267 ==> default: -- Memory Backing: hugepages: 00:00:58.267 ==> default: -- Management MAC: 00:00:58.267 ==> default: -- Loader: 00:00:58.267 ==> default: -- Nvram: 00:00:58.267 ==> default: -- Base box: spdk/freebsd14 00:00:58.267 ==> default: -- Storage pool: default 00:00:58.267 ==> default: -- Image: /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1718332871-2294_default_1721063993_0f97593a3d8fce1a7455.img (32G) 00:00:58.267 ==> default: -- Volume Cache: default 00:00:58.267 ==> default: -- Kernel: 00:00:58.267 ==> default: -- Initrd: 00:00:58.267 ==> default: -- Graphics Type: vnc 00:00:58.267 ==> default: -- Graphics Port: -1 00:00:58.267 ==> default: -- Graphics IP: 127.0.0.1 00:00:58.267 ==> default: -- Graphics Password: Not defined 00:00:58.267 ==> default: -- Video Type: cirrus 00:00:58.267 ==> default: -- Video VRAM: 9216 00:00:58.267 ==> default: -- Sound Type: 00:00:58.267 ==> default: -- Keymap: en-us 00:00:58.267 ==> default: -- TPM Path: 00:00:58.267 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:58.267 ==> default: -- Command line args: 00:00:58.267 ==> default: -> value=-device, 00:00:58.267 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:58.267 ==> default: -> value=-drive, 00:00:58.267 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:00:58.267 ==> default: -> value=-device, 00:00:58.267 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:58.526 ==> default: Creating shared folders metadata... 00:00:58.526 ==> default: Starting domain. 00:01:00.472 ==> default: Waiting for domain to get an IP address... 00:01:22.388 ==> default: Waiting for SSH to become available... 00:01:34.583 ==> default: Configuring and enabling network interfaces... 
00:01:38.764 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:53.626 ==> default: Mounting SSHFS shared folder... 00:01:53.626 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output 00:01:53.626 ==> default: Checking Mount.. 00:01:54.560 ==> default: Folder Successfully Mounted! 00:01:54.560 ==> default: Running provisioner: file... 00:01:55.495 default: ~/.gitconfig => .gitconfig 00:01:56.429 00:01:56.429 SUCCESS! 00:01:56.429 00:01:56.429 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt and type "vagrant ssh" to use. 00:01:56.429 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:56.429 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt" to destroy all trace of vm. 00:01:56.429 00:01:56.438 [Pipeline] } 00:01:56.455 [Pipeline] // stage 00:01:56.464 [Pipeline] dir 00:01:56.465 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt 00:01:56.467 [Pipeline] { 00:01:56.482 [Pipeline] catchError 00:01:56.483 [Pipeline] { 00:01:56.498 [Pipeline] sh 00:01:56.774 + vagrant ssh-config --host vagrant 00:01:56.774 + sed -ne /^Host/,$p 00:01:56.774 + tee ssh_conf 00:02:00.978 Host vagrant 00:02:00.978 HostName 192.168.121.106 00:02:00.978 User vagrant 00:02:00.978 Port 22 00:02:00.978 UserKnownHostsFile /dev/null 00:02:00.978 StrictHostKeyChecking no 00:02:00.978 PasswordAuthentication no 00:02:00.978 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1718332871-2294/libvirt/freebsd14 00:02:00.978 IdentitiesOnly yes 00:02:00.978 LogLevel FATAL 00:02:00.978 ForwardAgent yes 00:02:00.978 ForwardX11 yes 00:02:00.978 00:02:00.993 [Pipeline] withEnv 00:02:00.995 [Pipeline] { 00:02:01.009 [Pipeline] sh 00:02:01.286 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:01.286 source /etc/os-release 00:02:01.286 [[ -e /image.version ]] && img=$(< /image.version) 00:02:01.286 # Minimal, systemd-like check. 00:02:01.286 if [[ -e /.dockerenv ]]; then 00:02:01.286 # Clear garbage from the node's name: 00:02:01.286 # agt-er_autotest_547-896 -> autotest_547-896 00:02:01.286 # $HOSTNAME is the actual container id 00:02:01.286 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:01.286 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:01.286 # We can assume this is a mount from a host where container is running, 00:02:01.286 # so fetch its hostname to easily identify the target swarm worker. 
00:02:01.286 container="$(< /etc/hostname) ($agent)" 00:02:01.286 else 00:02:01.286 # Fallback 00:02:01.286 container=$agent 00:02:01.286 fi 00:02:01.286 fi 00:02:01.286 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:01.286 00:02:01.296 [Pipeline] } 00:02:01.316 [Pipeline] // withEnv 00:02:01.325 [Pipeline] setCustomBuildProperty 00:02:01.340 [Pipeline] stage 00:02:01.342 [Pipeline] { (Tests) 00:02:01.362 [Pipeline] sh 00:02:01.640 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:01.911 [Pipeline] sh 00:02:02.234 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:02.248 [Pipeline] timeout 00:02:02.249 Timeout set to expire in 1 hr 30 min 00:02:02.250 [Pipeline] { 00:02:02.265 [Pipeline] sh 00:02:02.539 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:03.105 HEAD is now at 455fda465 nvme_pci: ctrlr_scan_attached callback 00:02:03.119 [Pipeline] sh 00:02:03.399 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:03.414 [Pipeline] sh 00:02:03.694 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:03.712 [Pipeline] sh 00:02:03.992 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo 00:02:03.992 ++ readlink -f spdk_repo 00:02:03.992 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:03.992 + [[ -n /home/vagrant/spdk_repo ]] 00:02:03.992 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:03.992 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:03.992 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:03.992 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:03.992 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:03.992 + [[ freebsd-vg-autotest == pkgdep-* ]] 00:02:03.992 + cd /home/vagrant/spdk_repo 00:02:03.992 + source /etc/os-release 00:02:03.992 ++ NAME=FreeBSD 00:02:03.992 ++ VERSION=14.0-RELEASE 00:02:03.992 ++ VERSION_ID=14.0 00:02:03.992 ++ ID=freebsd 00:02:03.992 ++ ANSI_COLOR='0;31' 00:02:03.992 ++ PRETTY_NAME='FreeBSD 14.0-RELEASE' 00:02:03.992 ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0 00:02:03.992 ++ HOME_URL=https://FreeBSD.org/ 00:02:03.992 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/ 00:02:03.992 + uname -a 00:02:03.992 FreeBSD freebsd-cloud-1718332871-2294.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 00:02:03.992 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:04.251 Contigmem (not present) 00:02:04.251 Buffer Size: not set 00:02:04.251 Num Buffers: not set 00:02:04.251 00:02:04.251 00:02:04.251 Type BDF Vendor Device Driver 00:02:04.251 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:02:04.251 + rm -f /tmp/spdk-ld-path 00:02:04.251 + source autorun-spdk.conf 00:02:04.251 ++ SPDK_TEST_UNITTEST=1 00:02:04.251 ++ SPDK_RUN_VALGRIND=0 00:02:04.251 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.251 ++ SPDK_TEST_NVME=1 00:02:04.251 ++ SPDK_TEST_BLOCKDEV=1 00:02:04.251 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.251 ++ RUN_NIGHTLY=0 00:02:04.251 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:04.251 + [[ -n '' ]] 00:02:04.251 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:04.251 + for M in /var/spdk/build-*-manifest.txt 00:02:04.251 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:04.251 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.251 + for M in /var/spdk/build-*-manifest.txt 00:02:04.251 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:04.251 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:04.251 ++ uname 00:02:04.251 + [[ FreeBSD == \L\i\n\u\x ]] 00:02:04.251 + dmesg_pid=1231 00:02:04.251 + [[ FreeBSD == FreeBSD ]] 00:02:04.251 + tail -F /var/log/messages 00:02:04.251 + export LC_ALL=C LC_CTYPE=C 00:02:04.251 + LC_ALL=C 00:02:04.251 + LC_CTYPE=C 00:02:04.251 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.251 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:04.251 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:04.251 + [[ -x /usr/src/fio-static/fio ]] 00:02:04.251 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:04.251 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:04.251 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:04.251 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64) 00:02:04.251 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:04.251 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:02:04.251 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:04.251 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:04.251 Test configuration: 00:02:04.251 SPDK_TEST_UNITTEST=1 00:02:04.251 SPDK_RUN_VALGRIND=0 00:02:04.251 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:04.251 SPDK_TEST_NVME=1 00:02:04.251 SPDK_TEST_BLOCKDEV=1 00:02:04.251 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:04.251 RUN_NIGHTLY=0 17:21:00 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:04.251 17:21:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:04.251 17:21:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:04.251 17:21:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:04.251 17:21:00 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:04.251 17:21:00 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:04.251 17:21:00 -- paths/export.sh@4 -- $ export PATH 00:02:04.251 17:21:00 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:02:04.251 17:21:00 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:04.509 17:21:00 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:04.509 17:21:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721064060.XXXXXX 00:02:04.509 17:21:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721064060.XXXXXX.XDLjRkdUkf 00:02:04.509 17:21:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:04.509 17:21:00 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:04.509 17:21:00 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:04.509 17:21:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:04.510 17:21:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:04.510 17:21:00 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:04.510 17:21:00 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:04.510 17:21:00 -- common/autotest_common.sh@10 -- $ set +x 00:02:04.510 17:21:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:02:04.510 17:21:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:04.510 17:21:00 -- pm/common@17 -- $ local monitor 00:02:04.510 17:21:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:04.510 17:21:00 -- pm/common@25 -- $ sleep 1 00:02:04.510 17:21:00 -- 
pm/common@21 -- $ date +%s 00:02:04.510 17:21:00 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1721064060 00:02:04.510 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1721064060_collect-vmstat.pm.log 00:02:05.456 17:21:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:05.456 17:21:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:05.456 17:21:01 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:05.456 17:21:01 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:05.456 17:21:01 -- spdk/autobuild.sh@16 -- $ date -u 00:02:05.456 Mon Jul 15 17:21:01 UTC 2024 00:02:05.456 17:21:01 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:05.456 v24.09-pre-208-g455fda465 00:02:05.456 17:21:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:05.456 17:21:01 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']' 00:02:05.456 17:21:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:05.456 17:21:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:05.456 17:21:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:05.456 17:21:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:05.456 17:21:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:05.456 17:21:01 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]] 00:02:05.456 17:21:01 -- spdk/autobuild.sh@58 -- $ unittest_build 00:02:05.456 17:21:01 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build 00:02:05.456 17:21:01 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']' 00:02:05.456 17:21:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:05.456 17:21:01 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.456 ************************************ 00:02:05.456 START TEST unittest_build 00:02:05.456 ************************************ 00:02:05.456 17:21:01 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build 00:02:05.456 17:21:01 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared 00:02:06.022 Notice: Vhost, rte_vhost library, virtio, and fuse 00:02:06.022 are only supported on Linux. Turning off default feature. 00:02:06.330 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:06.330 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:06.892 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:02:06.892 Using 'verbs' RDMA provider 00:02:17.183 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:27.224 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:27.224 Creating mk/config.mk...done. 00:02:27.224 Creating mk/cc.flags.mk...done. 00:02:27.224 Type 'gmake' to build. 00:02:27.224 17:21:22 unittest_build -- common/autobuild_common.sh@412 -- $ gmake -j10 00:02:27.224 gmake[1]: Nothing to be done for 'all'. 
00:02:31.473 ps: stdin: not a terminal 00:02:36.738 The Meson build system 00:02:36.738 Version: 1.4.0 00:02:36.738 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:36.738 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:36.738 Build type: native build 00:02:36.738 Program cat found: YES (/bin/cat) 00:02:36.738 Project name: DPDK 00:02:36.738 Project version: 24.03.0 00:02:36.738 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:02:36.738 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:02:36.738 Host machine cpu family: x86_64 00:02:36.738 Host machine cpu: x86_64 00:02:36.738 Message: ## Building in Developer Mode ## 00:02:36.738 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:02:36.738 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:36.738 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:36.738 Program python3 found: YES (/usr/local/bin/python3.9) 00:02:36.738 Program cat found: YES (/bin/cat) 00:02:36.738 Compiler for C supports arguments -march=native: YES 00:02:36.738 Checking for size of "void *" : 8 00:02:36.738 Checking for size of "void *" : 8 (cached) 00:02:36.738 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:36.738 Library m found: YES 00:02:36.738 Library numa found: NO 00:02:36.738 Library fdt found: NO 00:02:36.738 Library execinfo found: YES 00:02:36.738 Has header "execinfo.h" : YES 00:02:36.738 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:02:36.738 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:36.738 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:36.738 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:36.738 Run-time dependency openssl found: YES 3.0.13 00:02:36.738 Run-time dependency libpcap found: NO (tried pkgconfig) 00:02:36.738 Library pcap found: YES 00:02:36.738 Has header "pcap.h" with dependency -lpcap: YES 00:02:36.738 Compiler for C supports arguments -Wcast-qual: YES 00:02:36.738 Compiler for C supports arguments -Wdeprecated: YES 00:02:36.738 Compiler for C supports arguments -Wformat: YES 00:02:36.738 Compiler for C supports arguments -Wformat-nonliteral: YES 00:02:36.738 Compiler for C supports arguments -Wformat-security: YES 00:02:36.738 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.738 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:36.738 Compiler for C supports arguments -Wnested-externs: YES 00:02:36.738 Compiler for C supports arguments -Wold-style-definition: YES 00:02:36.738 Compiler for C supports arguments -Wpointer-arith: YES 00:02:36.738 Compiler for C supports arguments -Wsign-compare: YES 00:02:36.738 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:36.738 Compiler for C supports arguments -Wundef: YES 00:02:36.738 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.738 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:36.738 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:02:36.738 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.738 Compiler for C supports arguments -mavx512f: YES 00:02:36.738 Checking if "AVX512 checking" compiles: YES 00:02:36.738 Fetching value of define "__SSE4_2__" : 1 00:02:36.738 Fetching value of 
define "__AES__" : 1 00:02:36.738 Fetching value of define "__AVX__" : 1 00:02:36.738 Fetching value of define "__AVX2__" : 1 00:02:36.738 Fetching value of define "__AVX512BW__" : (undefined) 00:02:36.738 Fetching value of define "__AVX512CD__" : (undefined) 00:02:36.738 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:36.738 Fetching value of define "__AVX512F__" : (undefined) 00:02:36.738 Fetching value of define "__AVX512VL__" : (undefined) 00:02:36.738 Fetching value of define "__PCLMUL__" : 1 00:02:36.738 Fetching value of define "__RDRND__" : 1 00:02:36.738 Fetching value of define "__RDSEED__" : 1 00:02:36.738 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:36.738 Fetching value of define "__znver1__" : (undefined) 00:02:36.738 Fetching value of define "__znver2__" : (undefined) 00:02:36.738 Fetching value of define "__znver3__" : (undefined) 00:02:36.738 Fetching value of define "__znver4__" : (undefined) 00:02:36.738 Compiler for C supports arguments -Wno-format-truncation: NO 00:02:36.738 Message: lib/log: Defining dependency "log" 00:02:36.738 Message: lib/kvargs: Defining dependency "kvargs" 00:02:36.738 Message: lib/telemetry: Defining dependency "telemetry" 00:02:36.738 Checking if "Detect argument count for CPU_OR" compiles: YES 00:02:36.738 Checking for function "getentropy" : YES 00:02:36.738 Message: lib/eal: Defining dependency "eal" 00:02:36.738 Message: lib/ring: Defining dependency "ring" 00:02:36.738 Message: lib/rcu: Defining dependency "rcu" 00:02:36.738 Message: lib/mempool: Defining dependency "mempool" 00:02:36.738 Message: lib/mbuf: Defining dependency "mbuf" 00:02:36.738 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:36.738 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:36.738 Compiler for C supports arguments -mpclmul: YES 00:02:36.738 Compiler for C supports arguments -maes: YES 00:02:36.738 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:36.738 Compiler for C supports arguments -mavx512bw: YES 00:02:36.738 Compiler for C supports arguments -mavx512dq: YES 00:02:36.738 Compiler for C supports arguments -mavx512vl: YES 00:02:36.738 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:36.738 Compiler for C supports arguments -mavx2: YES 00:02:36.738 Compiler for C supports arguments -mavx: YES 00:02:36.738 Message: lib/net: Defining dependency "net" 00:02:36.738 Message: lib/meter: Defining dependency "meter" 00:02:36.738 Message: lib/ethdev: Defining dependency "ethdev" 00:02:36.738 Message: lib/pci: Defining dependency "pci" 00:02:36.738 Message: lib/cmdline: Defining dependency "cmdline" 00:02:36.738 Message: lib/hash: Defining dependency "hash" 00:02:36.738 Message: lib/timer: Defining dependency "timer" 00:02:36.738 Message: lib/compressdev: Defining dependency "compressdev" 00:02:36.738 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:36.738 Message: lib/dmadev: Defining dependency "dmadev" 00:02:36.738 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:36.738 Message: lib/reorder: Defining dependency "reorder" 00:02:36.738 Message: lib/security: Defining dependency "security" 00:02:36.738 Has header "linux/userfaultfd.h" : NO 00:02:36.738 Has header "linux/vduse.h" : NO 00:02:36.738 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:02:36.738 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:36.738 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:36.738 Message: drivers/mempool/ring: Defining dependency 
"mempool_ring" 00:02:36.738 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:36.738 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:36.738 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:36.738 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:02:36.738 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:36.738 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:36.738 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:36.738 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:36.739 Configuring doxy-api-html.conf using configuration 00:02:36.739 Configuring doxy-api-man.conf using configuration 00:02:36.739 Program mandb found: NO 00:02:36.739 Program sphinx-build found: NO 00:02:36.739 Configuring rte_build_config.h using configuration 00:02:36.739 Message: 00:02:36.739 ================= 00:02:36.739 Applications Enabled 00:02:36.739 ================= 00:02:36.739 00:02:36.739 apps: 00:02:36.739 00:02:36.739 00:02:36.739 Message: 00:02:36.739 ================= 00:02:36.739 Libraries Enabled 00:02:36.739 ================= 00:02:36.739 00:02:36.739 libs: 00:02:36.739 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:36.739 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:36.739 cryptodev, dmadev, reorder, security, 00:02:36.739 00:02:36.739 Message: 00:02:36.739 =============== 00:02:36.739 Drivers Enabled 00:02:36.739 =============== 00:02:36.739 00:02:36.739 common: 00:02:36.739 00:02:36.739 bus: 00:02:36.739 pci, vdev, 00:02:36.739 mempool: 00:02:36.739 ring, 00:02:36.739 dma: 00:02:36.739 00:02:36.739 net: 00:02:36.739 00:02:36.739 crypto: 00:02:36.739 00:02:36.739 compress: 00:02:36.739 00:02:36.739 00:02:36.739 Message: 00:02:36.739 ================= 00:02:36.739 Content Skipped 00:02:36.739 ================= 00:02:36.739 00:02:36.739 apps: 00:02:36.739 dumpcap: explicitly disabled via build config 00:02:36.739 graph: explicitly disabled via build config 00:02:36.739 pdump: explicitly disabled via build config 00:02:36.739 proc-info: explicitly disabled via build config 00:02:36.739 test-acl: explicitly disabled via build config 00:02:36.739 test-bbdev: explicitly disabled via build config 00:02:36.739 test-cmdline: explicitly disabled via build config 00:02:36.739 test-compress-perf: explicitly disabled via build config 00:02:36.739 test-crypto-perf: explicitly disabled via build config 00:02:36.739 test-dma-perf: explicitly disabled via build config 00:02:36.739 test-eventdev: explicitly disabled via build config 00:02:36.739 test-fib: explicitly disabled via build config 00:02:36.739 test-flow-perf: explicitly disabled via build config 00:02:36.739 test-gpudev: explicitly disabled via build config 00:02:36.739 test-mldev: explicitly disabled via build config 00:02:36.739 test-pipeline: explicitly disabled via build config 00:02:36.739 test-pmd: explicitly disabled via build config 00:02:36.739 test-regex: explicitly disabled via build config 00:02:36.739 test-sad: explicitly disabled via build config 00:02:36.739 test-security-perf: explicitly disabled via build config 00:02:36.739 00:02:36.739 libs: 00:02:36.739 argparse: explicitly disabled via build config 00:02:36.739 metrics: explicitly disabled via build config 00:02:36.739 acl: explicitly disabled via build config 00:02:36.739 bbdev: explicitly disabled via build config 00:02:36.739 bitratestats: 
explicitly disabled via build config 00:02:36.739 bpf: explicitly disabled via build config 00:02:36.739 cfgfile: explicitly disabled via build config 00:02:36.739 distributor: explicitly disabled via build config 00:02:36.739 efd: explicitly disabled via build config 00:02:36.739 eventdev: explicitly disabled via build config 00:02:36.739 dispatcher: explicitly disabled via build config 00:02:36.739 gpudev: explicitly disabled via build config 00:02:36.739 gro: explicitly disabled via build config 00:02:36.739 gso: explicitly disabled via build config 00:02:36.739 ip_frag: explicitly disabled via build config 00:02:36.739 jobstats: explicitly disabled via build config 00:02:36.739 latencystats: explicitly disabled via build config 00:02:36.739 lpm: explicitly disabled via build config 00:02:36.739 member: explicitly disabled via build config 00:02:36.739 pcapng: explicitly disabled via build config 00:02:36.739 power: only supported on Linux 00:02:36.739 rawdev: explicitly disabled via build config 00:02:36.739 regexdev: explicitly disabled via build config 00:02:36.739 mldev: explicitly disabled via build config 00:02:36.739 rib: explicitly disabled via build config 00:02:36.739 sched: explicitly disabled via build config 00:02:36.739 stack: explicitly disabled via build config 00:02:36.739 vhost: only supported on Linux 00:02:36.739 ipsec: explicitly disabled via build config 00:02:36.739 pdcp: explicitly disabled via build config 00:02:36.739 fib: explicitly disabled via build config 00:02:36.739 port: explicitly disabled via build config 00:02:36.739 pdump: explicitly disabled via build config 00:02:36.739 table: explicitly disabled via build config 00:02:36.739 pipeline: explicitly disabled via build config 00:02:36.739 graph: explicitly disabled via build config 00:02:36.739 node: explicitly disabled via build config 00:02:36.739 00:02:36.739 drivers: 00:02:36.739 common/cpt: not in enabled drivers build config 00:02:36.739 common/dpaax: not in enabled drivers build config 00:02:36.739 common/iavf: not in enabled drivers build config 00:02:36.739 common/idpf: not in enabled drivers build config 00:02:36.739 common/ionic: not in enabled drivers build config 00:02:36.739 common/mvep: not in enabled drivers build config 00:02:36.739 common/octeontx: not in enabled drivers build config 00:02:36.739 bus/auxiliary: not in enabled drivers build config 00:02:36.739 bus/cdx: not in enabled drivers build config 00:02:36.739 bus/dpaa: not in enabled drivers build config 00:02:36.739 bus/fslmc: not in enabled drivers build config 00:02:36.739 bus/ifpga: not in enabled drivers build config 00:02:36.739 bus/platform: not in enabled drivers build config 00:02:36.739 bus/uacce: not in enabled drivers build config 00:02:36.739 bus/vmbus: not in enabled drivers build config 00:02:36.739 common/cnxk: not in enabled drivers build config 00:02:36.739 common/mlx5: not in enabled drivers build config 00:02:36.739 common/nfp: not in enabled drivers build config 00:02:36.739 common/nitrox: not in enabled drivers build config 00:02:36.739 common/qat: not in enabled drivers build config 00:02:36.739 common/sfc_efx: not in enabled drivers build config 00:02:36.739 mempool/bucket: not in enabled drivers build config 00:02:36.739 mempool/cnxk: not in enabled drivers build config 00:02:36.739 mempool/dpaa: not in enabled drivers build config 00:02:36.739 mempool/dpaa2: not in enabled drivers build config 00:02:36.739 mempool/octeontx: not in enabled drivers build config 00:02:36.739 mempool/stack: not in enabled 
drivers build config 00:02:36.739 dma/cnxk: not in enabled drivers build config 00:02:36.739 dma/dpaa: not in enabled drivers build config 00:02:36.739 dma/dpaa2: not in enabled drivers build config 00:02:36.739 dma/hisilicon: not in enabled drivers build config 00:02:36.739 dma/idxd: not in enabled drivers build config 00:02:36.739 dma/ioat: not in enabled drivers build config 00:02:36.739 dma/skeleton: not in enabled drivers build config 00:02:36.739 net/af_packet: not in enabled drivers build config 00:02:36.739 net/af_xdp: not in enabled drivers build config 00:02:36.739 net/ark: not in enabled drivers build config 00:02:36.739 net/atlantic: not in enabled drivers build config 00:02:36.739 net/avp: not in enabled drivers build config 00:02:36.739 net/axgbe: not in enabled drivers build config 00:02:36.739 net/bnx2x: not in enabled drivers build config 00:02:36.739 net/bnxt: not in enabled drivers build config 00:02:36.739 net/bonding: not in enabled drivers build config 00:02:36.739 net/cnxk: not in enabled drivers build config 00:02:36.739 net/cpfl: not in enabled drivers build config 00:02:36.739 net/cxgbe: not in enabled drivers build config 00:02:36.739 net/dpaa: not in enabled drivers build config 00:02:36.739 net/dpaa2: not in enabled drivers build config 00:02:36.739 net/e1000: not in enabled drivers build config 00:02:36.739 net/ena: not in enabled drivers build config 00:02:36.739 net/enetc: not in enabled drivers build config 00:02:36.739 net/enetfec: not in enabled drivers build config 00:02:36.739 net/enic: not in enabled drivers build config 00:02:36.739 net/failsafe: not in enabled drivers build config 00:02:36.739 net/fm10k: not in enabled drivers build config 00:02:36.739 net/gve: not in enabled drivers build config 00:02:36.739 net/hinic: not in enabled drivers build config 00:02:36.739 net/hns3: not in enabled drivers build config 00:02:36.739 net/i40e: not in enabled drivers build config 00:02:36.739 net/iavf: not in enabled drivers build config 00:02:36.739 net/ice: not in enabled drivers build config 00:02:36.739 net/idpf: not in enabled drivers build config 00:02:36.739 net/igc: not in enabled drivers build config 00:02:36.739 net/ionic: not in enabled drivers build config 00:02:36.739 net/ipn3ke: not in enabled drivers build config 00:02:36.739 net/ixgbe: not in enabled drivers build config 00:02:36.739 net/mana: not in enabled drivers build config 00:02:36.739 net/memif: not in enabled drivers build config 00:02:36.739 net/mlx4: not in enabled drivers build config 00:02:36.739 net/mlx5: not in enabled drivers build config 00:02:36.739 net/mvneta: not in enabled drivers build config 00:02:36.739 net/mvpp2: not in enabled drivers build config 00:02:36.739 net/netvsc: not in enabled drivers build config 00:02:36.739 net/nfb: not in enabled drivers build config 00:02:36.739 net/nfp: not in enabled drivers build config 00:02:36.739 net/ngbe: not in enabled drivers build config 00:02:36.739 net/null: not in enabled drivers build config 00:02:36.739 net/octeontx: not in enabled drivers build config 00:02:36.739 net/octeon_ep: not in enabled drivers build config 00:02:36.739 net/pcap: not in enabled drivers build config 00:02:36.739 net/pfe: not in enabled drivers build config 00:02:36.739 net/qede: not in enabled drivers build config 00:02:36.739 net/ring: not in enabled drivers build config 00:02:36.739 net/sfc: not in enabled drivers build config 00:02:36.739 net/softnic: not in enabled drivers build config 00:02:36.739 net/tap: not in enabled drivers build config 
00:02:36.739 net/thunderx: not in enabled drivers build config 00:02:36.739 net/txgbe: not in enabled drivers build config 00:02:36.739 net/vdev_netvsc: not in enabled drivers build config 00:02:36.739 net/vhost: not in enabled drivers build config 00:02:36.739 net/virtio: not in enabled drivers build config 00:02:36.739 net/vmxnet3: not in enabled drivers build config 00:02:36.739 raw/*: missing internal dependency, "rawdev" 00:02:36.739 crypto/armv8: not in enabled drivers build config 00:02:36.739 crypto/bcmfs: not in enabled drivers build config 00:02:36.739 crypto/caam_jr: not in enabled drivers build config 00:02:36.739 crypto/ccp: not in enabled drivers build config 00:02:36.739 crypto/cnxk: not in enabled drivers build config 00:02:36.739 crypto/dpaa_sec: not in enabled drivers build config 00:02:36.739 crypto/dpaa2_sec: not in enabled drivers build config 00:02:36.739 crypto/ipsec_mb: not in enabled drivers build config 00:02:36.739 crypto/mlx5: not in enabled drivers build config 00:02:36.739 crypto/mvsam: not in enabled drivers build config 00:02:36.739 crypto/nitrox: not in enabled drivers build config 00:02:36.739 crypto/null: not in enabled drivers build config 00:02:36.739 crypto/octeontx: not in enabled drivers build config 00:02:36.739 crypto/openssl: not in enabled drivers build config 00:02:36.740 crypto/scheduler: not in enabled drivers build config 00:02:36.740 crypto/uadk: not in enabled drivers build config 00:02:36.740 crypto/virtio: not in enabled drivers build config 00:02:36.740 compress/isal: not in enabled drivers build config 00:02:36.740 compress/mlx5: not in enabled drivers build config 00:02:36.740 compress/nitrox: not in enabled drivers build config 00:02:36.740 compress/octeontx: not in enabled drivers build config 00:02:36.740 compress/zlib: not in enabled drivers build config 00:02:36.740 regex/*: missing internal dependency, "regexdev" 00:02:36.740 ml/*: missing internal dependency, "mldev" 00:02:36.740 vdpa/*: missing internal dependency, "vhost" 00:02:36.740 event/*: missing internal dependency, "eventdev" 00:02:36.740 baseband/*: missing internal dependency, "bbdev" 00:02:36.740 gpu/*: missing internal dependency, "gpudev" 00:02:36.740 00:02:36.740 00:02:36.740 Build targets in project: 81 00:02:36.740 00:02:36.740 DPDK 24.03.0 00:02:36.740 00:02:36.740 User defined options 00:02:36.740 buildtype : debug 00:02:36.740 default_library : static 00:02:36.740 libdir : lib 00:02:36.740 prefix : / 00:02:36.740 c_args : -fPIC -Werror 00:02:36.740 c_link_args : 00:02:36.740 cpu_instruction_set: native 00:02:36.740 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:36.740 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:36.740 enable_docs : false 00:02:36.740 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:36.740 enable_kmods : true 00:02:36.740 max_lcores : 128 00:02:36.740 tests : false 00:02:36.740 00:02:36.740 Found ninja-1.11.1 at /usr/local/bin/ninja 00:02:36.998 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:36.998 [1/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:02:36.998 
[2/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:36.998 [3/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:36.998 [4/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:36.998 [5/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:36.998 [6/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:37.255 [7/233] Linking static target lib/librte_kvargs.a 00:02:37.255 [8/233] Linking static target lib/librte_log.a 00:02:37.514 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.514 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:37.514 [11/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:37.514 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:37.514 [13/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:37.514 [14/233] Linking static target lib/librte_telemetry.a 00:02:37.514 [15/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:37.514 [16/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.514 [17/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:37.773 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:37.773 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.773 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:38.031 [21/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.031 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.031 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:38.031 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.031 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.031 [26/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.031 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.031 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:38.031 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.289 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:38.289 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.289 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.289 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:38.289 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:38.289 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:38.547 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.547 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:38.547 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:38.547 [39/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:38.547 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:38.547 [41/233] Compiling C object 
lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:38.547 [42/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:38.805 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:38.805 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:38.805 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:38.805 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:38.805 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:39.063 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:39.063 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:39.063 [50/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:39.063 [51/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:39.063 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:39.063 [53/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:02:39.063 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:39.321 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:39.321 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:39.321 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:39.321 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:39.579 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:02:39.579 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:02:39.579 [61/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:39.579 [62/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:39.579 [63/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:02:39.579 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:39.579 [65/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:02:39.579 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:02:39.579 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:02:39.837 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:02:39.837 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:02:39.837 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:02:39.837 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:02:40.096 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.096 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.096 [74/233] Linking static target lib/librte_eal.a 00:02:40.096 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.096 [76/233] Linking static target lib/librte_ring.a 00:02:40.096 [77/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.096 [78/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:40.354 [79/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:40.354 [80/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:40.354 [81/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:40.354 [82/233] 
Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:40.354 [83/233] Linking static target lib/librte_rcu.a 00:02:40.354 [84/233] Linking static target lib/librte_mempool.a 00:02:40.354 [85/233] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.354 [86/233] Linking target lib/librte_log.so.24.1 00:02:40.354 [87/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:40.354 [88/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:40.354 [89/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:40.612 [90/233] Linking target lib/librte_kvargs.so.24.1 00:02:40.612 [91/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:40.612 [92/233] Linking target lib/librte_telemetry.so.24.1 00:02:40.612 [93/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:40.612 [94/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.612 [95/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.612 [96/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:40.612 [97/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:40.612 [98/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:40.870 [99/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:40.871 [100/233] Linking static target lib/librte_mbuf.a 00:02:40.871 [101/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:40.871 [102/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:40.871 [103/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.129 [104/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:41.129 [105/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.129 [106/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:41.129 [107/233] Linking static target lib/librte_net.a 00:02:41.129 [108/233] Linking static target lib/librte_meter.a 00:02:41.391 [109/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:41.391 [110/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.391 [111/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:41.391 [112/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.391 [113/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.391 [114/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.650 [115/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:41.909 [116/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:41.909 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:41.909 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:41.909 [119/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:41.909 [120/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:41.909 [121/233] Linking static target lib/librte_pci.a 00:02:42.168 [122/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.168 [123/233] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:42.168 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:42.168 [125/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.168 [126/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:42.168 [127/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.168 [128/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:42.168 [129/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:42.168 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:42.168 [131/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:42.168 [132/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:42.168 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:42.168 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:42.427 [135/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:42.427 [136/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.427 [137/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:42.427 [138/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:42.427 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.427 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.427 [141/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.427 [142/233] Linking static target lib/librte_ethdev.a 00:02:42.684 [143/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.684 [144/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.684 [145/233] Linking static target lib/librte_cmdline.a 00:02:42.684 [146/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.684 [147/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:42.684 [148/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:42.942 [149/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.942 [150/233] Linking static target lib/librte_timer.a 00:02:42.942 [151/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:42.942 [152/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:42.942 [153/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:42.942 [154/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:42.942 [155/233] Linking static target lib/librte_hash.a 00:02:43.201 [156/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.201 [157/233] Linking static target lib/librte_compressdev.a 00:02:43.201 [158/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.459 [159/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:43.459 [160/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:43.459 [161/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:43.459 [162/233] Linking static target lib/librte_dmadev.a 00:02:43.459 [163/233] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:43.459 [164/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:43.718 [165/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.718 [166/233] Linking static target lib/librte_reorder.a 00:02:43.718 [167/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.718 [168/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.718 [169/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:43.718 [170/233] Linking static target lib/librte_cryptodev.a 00:02:43.718 [171/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:43.718 [172/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.976 [173/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.976 [174/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.976 [175/233] Linking static target lib/librte_security.a 00:02:43.976 [176/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:43.976 [177/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.976 [178/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:43.976 [179/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:02:43.976 [180/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.234 [181/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.234 [182/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.234 [183/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.234 [184/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.234 [185/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.234 [186/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.234 [187/233] Linking static target drivers/librte_bus_pci.a 00:02:44.492 [188/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.492 [189/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.492 [190/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.492 [191/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.492 [192/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.492 [193/233] Linking static target drivers/librte_bus_vdev.a 00:02:44.492 [194/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.749 [195/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.749 [196/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.749 [197/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.749 [198/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.749 [199/233] Compiling C object 
drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.749 [200/233] Linking static target drivers/librte_mempool_ring.a 00:02:45.316 [201/233] Generating kernel/freebsd/contigmem with a custom command 00:02:45.316 machine -> /usr/src/sys/amd64/include 00:02:45.316 x86 -> /usr/src/sys/x86/include 00:02:45.316 i386 -> /usr/src/sys/i386/include 00:02:45.316 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:02:45.316 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:02:45.316 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:02:45.316 touch opt_global.h 00:02:45.316 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:02:45.316 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:02:45.316 :> export_syms 00:02:45.316 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:02:45.316 objcopy --strip-debug contigmem.ko 00:02:45.574 [202/233] Generating kernel/freebsd/nic_uio with a custom command 00:02:45.574 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:02:45.575 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:02:45.575 :> export_syms 00:02:45.575 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:02:45.575 objcopy --strip-debug nic_uio.ko 00:02:48.105 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.659 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.659 [205/233] Linking target lib/librte_eal.so.24.1 00:02:50.916 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:50.916 [207/233] Linking target lib/librte_dmadev.so.24.1 00:02:50.916 [208/233] Linking target lib/librte_meter.so.24.1 00:02:50.916 [209/233] Linking target lib/librte_timer.so.24.1 00:02:50.916 [210/233] Linking target lib/librte_pci.so.24.1 00:02:50.916 [211/233] Linking target lib/librte_ring.so.24.1 00:02:50.916 [212/233] Linking target drivers/librte_bus_vdev.so.24.1 00:02:50.916 [213/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:50.916 [214/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:50.916 [215/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:50.916 [216/233] Linking target lib/librte_mempool.so.24.1 00:02:50.916 [217/233] Linking target drivers/librte_bus_pci.so.24.1 00:02:50.916 [218/233] Linking target lib/librte_rcu.so.24.1 00:02:51.172 [219/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:51.172 [220/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:51.172 [221/233] Linking target drivers/librte_mempool_ring.so.24.1 00:02:51.172 [222/233] Linking target lib/librte_mbuf.so.24.1 00:02:51.172 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:51.431 [224/233] Linking target lib/librte_compressdev.so.24.1 00:02:51.431 [225/233] Linking target lib/librte_cryptodev.so.24.1 00:02:51.431 [226/233] Linking target lib/librte_reorder.so.24.1 00:02:51.431 [227/233] Linking target lib/librte_net.so.24.1 00:02:51.431 [228/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:51.431 [229/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:51.431 [230/233] Linking target lib/librte_hash.so.24.1 00:02:51.431 [231/233] 
Linking target lib/librte_security.so.24.1 00:02:51.431 [232/233] Linking target lib/librte_ethdev.so.24.1 00:02:51.431 [233/233] Linking target lib/librte_cmdline.so.24.1 00:02:51.431 INFO: autodetecting backend as ninja 00:02:51.431 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:52.362 CC lib/log/log.o 00:02:52.362 CC lib/log/log_flags.o 00:02:52.362 CC lib/ut/ut.o 00:02:52.362 CC lib/log/log_deprecated.o 00:02:52.362 CC lib/ut_mock/mock.o 00:02:52.362 LIB libspdk_ut_mock.a 00:02:52.362 LIB libspdk_ut.a 00:02:52.362 LIB libspdk_log.a 00:02:52.622 CC lib/dma/dma.o 00:02:52.622 CXX lib/trace_parser/trace.o 00:02:52.622 CC lib/ioat/ioat.o 00:02:52.622 CC lib/util/base64.o 00:02:52.622 CC lib/util/bit_array.o 00:02:52.622 CC lib/util/cpuset.o 00:02:52.622 CC lib/util/crc16.o 00:02:52.622 CC lib/util/crc32.o 00:02:52.622 CC lib/util/crc32c.o 00:02:52.622 CC lib/util/crc32_ieee.o 00:02:52.622 CC lib/util/crc64.o 00:02:52.622 CC lib/util/dif.o 00:02:52.622 CC lib/util/fd.o 00:02:52.622 CC lib/util/file.o 00:02:52.622 LIB libspdk_dma.a 00:02:52.622 CC lib/util/hexlify.o 00:02:52.622 CC lib/util/iov.o 00:02:52.622 CC lib/util/math.o 00:02:52.622 CC lib/util/pipe.o 00:02:52.622 LIB libspdk_ioat.a 00:02:52.622 CC lib/util/strerror_tls.o 00:02:52.622 CC lib/util/string.o 00:02:52.622 CC lib/util/uuid.o 00:02:52.622 CC lib/util/fd_group.o 00:02:52.879 CC lib/util/xor.o 00:02:52.879 CC lib/util/zipf.o 00:02:52.879 LIB libspdk_util.a 00:02:52.879 CC lib/rdma_provider/common.o 00:02:52.879 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.879 CC lib/json/json_parse.o 00:02:52.879 CC lib/rdma_utils/rdma_utils.o 00:02:52.879 CC lib/json/json_util.o 00:02:52.879 CC lib/idxd/idxd.o 00:02:52.879 CC lib/env_dpdk/env.o 00:02:52.879 CC lib/vmd/vmd.o 00:02:52.879 CC lib/conf/conf.o 00:02:53.136 CC lib/idxd/idxd_user.o 00:02:53.136 LIB libspdk_rdma_provider.a 00:02:53.136 CC lib/env_dpdk/memory.o 00:02:53.136 CC lib/json/json_write.o 00:02:53.136 CC lib/env_dpdk/pci.o 00:02:53.136 LIB libspdk_rdma_utils.a 00:02:53.136 LIB libspdk_conf.a 00:02:53.136 CC lib/vmd/led.o 00:02:53.136 CC lib/env_dpdk/init.o 00:02:53.136 CC lib/env_dpdk/threads.o 00:02:53.136 LIB libspdk_idxd.a 00:02:53.136 CC lib/env_dpdk/pci_ioat.o 00:02:53.136 CC lib/env_dpdk/pci_virtio.o 00:02:53.136 LIB libspdk_vmd.a 00:02:53.136 CC lib/env_dpdk/pci_vmd.o 00:02:53.136 CC lib/env_dpdk/pci_idxd.o 00:02:53.136 CC lib/env_dpdk/pci_event.o 00:02:53.136 CC lib/env_dpdk/sigbus_handler.o 00:02:53.136 LIB libspdk_json.a 00:02:53.393 CC lib/env_dpdk/pci_dpdk.o 00:02:53.393 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:53.393 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:53.393 CC lib/jsonrpc/jsonrpc_server.o 00:02:53.393 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:53.393 CC lib/jsonrpc/jsonrpc_client.o 00:02:53.393 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:53.393 LIB libspdk_jsonrpc.a 00:02:53.650 CC lib/rpc/rpc.o 00:02:53.650 LIB libspdk_rpc.a 00:02:53.650 CC lib/notify/notify.o 00:02:53.650 CC lib/trace/trace.o 00:02:53.650 CC lib/trace/trace_flags.o 00:02:53.650 CC lib/trace/trace_rpc.o 00:02:53.651 CC lib/notify/notify_rpc.o 00:02:53.651 CC lib/keyring/keyring_rpc.o 00:02:53.651 CC lib/keyring/keyring.o 00:02:53.907 LIB libspdk_env_dpdk.a 00:02:53.907 LIB libspdk_notify.a 00:02:53.907 LIB libspdk_keyring.a 00:02:53.907 LIB libspdk_trace.a 00:02:53.907 CC lib/thread/thread.o 00:02:53.907 CC lib/thread/iobuf.o 00:02:53.907 CC lib/sock/sock.o 00:02:53.907 CC lib/sock/sock_rpc.o 00:02:54.163 LIB 
libspdk_trace_parser.a 00:02:54.163 LIB libspdk_sock.a 00:02:54.163 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:54.163 CC lib/nvme/nvme_ctrlr.o 00:02:54.163 CC lib/nvme/nvme_fabric.o 00:02:54.163 CC lib/nvme/nvme_ns.o 00:02:54.163 CC lib/nvme/nvme_ns_cmd.o 00:02:54.163 CC lib/nvme/nvme_pcie_common.o 00:02:54.163 CC lib/nvme/nvme_pcie.o 00:02:54.163 CC lib/nvme/nvme_qpair.o 00:02:54.163 CC lib/nvme/nvme.o 00:02:54.163 LIB libspdk_thread.a 00:02:54.163 CC lib/nvme/nvme_quirks.o 00:02:54.728 CC lib/nvme/nvme_transport.o 00:02:54.728 CC lib/nvme/nvme_discovery.o 00:02:54.728 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:54.728 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:54.728 CC lib/accel/accel.o 00:02:54.728 CC lib/blob/blobstore.o 00:02:54.728 CC lib/init/json_config.o 00:02:54.728 CC lib/accel/accel_rpc.o 00:02:54.985 CC lib/init/subsystem.o 00:02:54.985 CC lib/accel/accel_sw.o 00:02:54.985 CC lib/init/subsystem_rpc.o 00:02:54.985 CC lib/blob/request.o 00:02:54.985 CC lib/init/rpc.o 00:02:54.985 CC lib/nvme/nvme_tcp.o 00:02:54.985 CC lib/blob/zeroes.o 00:02:54.986 LIB libspdk_accel.a 00:02:54.986 CC lib/nvme/nvme_opal.o 00:02:54.986 LIB libspdk_init.a 00:02:54.986 CC lib/nvme/nvme_io_msg.o 00:02:54.986 CC lib/blob/blob_bs_dev.o 00:02:55.243 CC lib/bdev/bdev.o 00:02:55.243 CC lib/nvme/nvme_poll_group.o 00:02:55.243 CC lib/bdev/bdev_rpc.o 00:02:55.243 CC lib/bdev/bdev_zone.o 00:02:55.243 CC lib/event/app.o 00:02:55.243 CC lib/bdev/part.o 00:02:55.243 CC lib/event/reactor.o 00:02:55.500 LIB libspdk_blob.a 00:02:55.500 CC lib/nvme/nvme_zns.o 00:02:55.500 CC lib/bdev/scsi_nvme.o 00:02:55.500 CC lib/nvme/nvme_stubs.o 00:02:55.500 CC lib/event/log_rpc.o 00:02:55.500 CC lib/nvme/nvme_auth.o 00:02:55.500 CC lib/blobfs/blobfs.o 00:02:55.500 CC lib/event/app_rpc.o 00:02:55.500 CC lib/blobfs/tree.o 00:02:55.500 CC lib/event/scheduler_static.o 00:02:55.758 CC lib/lvol/lvol.o 00:02:55.758 LIB libspdk_event.a 00:02:55.758 CC lib/nvme/nvme_rdma.o 00:02:55.758 LIB libspdk_blobfs.a 00:02:55.758 LIB libspdk_bdev.a 00:02:55.758 CC lib/scsi/dev.o 00:02:55.758 CC lib/scsi/lun.o 00:02:55.758 CC lib/scsi/port.o 00:02:55.758 CC lib/scsi/scsi.o 00:02:55.758 CC lib/scsi/scsi_bdev.o 00:02:55.758 LIB libspdk_lvol.a 00:02:56.016 CC lib/scsi/scsi_pr.o 00:02:56.016 CC lib/scsi/scsi_rpc.o 00:02:56.016 CC lib/scsi/task.o 00:02:56.016 LIB libspdk_scsi.a 00:02:56.297 CC lib/iscsi/conn.o 00:02:56.297 CC lib/iscsi/init_grp.o 00:02:56.297 CC lib/iscsi/iscsi.o 00:02:56.297 CC lib/iscsi/md5.o 00:02:56.297 CC lib/iscsi/param.o 00:02:56.298 CC lib/iscsi/portal_grp.o 00:02:56.298 CC lib/iscsi/tgt_node.o 00:02:56.298 CC lib/iscsi/iscsi_subsystem.o 00:02:56.298 CC lib/iscsi/iscsi_rpc.o 00:02:56.298 CC lib/iscsi/task.o 00:02:56.298 LIB libspdk_nvme.a 00:02:56.556 CC lib/nvmf/ctrlr.o 00:02:56.556 CC lib/nvmf/ctrlr_discovery.o 00:02:56.556 CC lib/nvmf/ctrlr_bdev.o 00:02:56.556 CC lib/nvmf/subsystem.o 00:02:56.556 CC lib/nvmf/nvmf.o 00:02:56.556 CC lib/nvmf/nvmf_rpc.o 00:02:56.556 CC lib/nvmf/transport.o 00:02:56.556 CC lib/nvmf/tcp.o 00:02:56.556 CC lib/nvmf/stubs.o 00:02:56.556 CC lib/nvmf/mdns_server.o 00:02:56.556 LIB libspdk_iscsi.a 00:02:56.556 CC lib/nvmf/rdma.o 00:02:56.556 CC lib/nvmf/auth.o 00:02:57.122 LIB libspdk_nvmf.a 00:02:57.122 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.122 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.122 CC module/accel/error/accel_error.o 00:02:57.122 CC module/accel/error/accel_error_rpc.o 00:02:57.122 CC module/accel/iaa/accel_iaa.o 00:02:57.122 CC module/sock/posix/posix.o 00:02:57.122 CC 
module/blob/bdev/blob_bdev.o 00:02:57.122 CC module/keyring/file/keyring.o 00:02:57.122 CC module/accel/ioat/accel_ioat.o 00:02:57.122 CC module/accel/dsa/accel_dsa.o 00:02:57.122 LIB libspdk_env_dpdk_rpc.a 00:02:57.122 CC module/keyring/file/keyring_rpc.o 00:02:57.122 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.379 LIB libspdk_accel_error.a 00:02:57.379 LIB libspdk_scheduler_dynamic.a 00:02:57.379 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.379 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.379 LIB libspdk_blob_bdev.a 00:02:57.379 LIB libspdk_keyring_file.a 00:02:57.379 LIB libspdk_accel_iaa.a 00:02:57.379 LIB libspdk_accel_dsa.a 00:02:57.379 LIB libspdk_accel_ioat.a 00:02:57.379 CC module/blobfs/bdev/blobfs_bdev.o 00:02:57.379 CC module/bdev/lvol/vbdev_lvol.o 00:02:57.379 CC module/bdev/gpt/gpt.o 00:02:57.379 CC module/bdev/nvme/bdev_nvme.o 00:02:57.379 CC module/bdev/delay/vbdev_delay.o 00:02:57.379 CC module/bdev/passthru/vbdev_passthru.o 00:02:57.379 CC module/bdev/null/bdev_null.o 00:02:57.379 CC module/bdev/error/vbdev_error.o 00:02:57.379 CC module/bdev/malloc/bdev_malloc.o 00:02:57.379 LIB libspdk_sock_posix.a 00:02:57.379 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:57.646 CC module/bdev/gpt/vbdev_gpt.o 00:02:57.646 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:57.646 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:57.646 CC module/bdev/null/bdev_null_rpc.o 00:02:57.646 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:57.646 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:57.646 CC module/bdev/error/vbdev_error_rpc.o 00:02:57.646 LIB libspdk_bdev_passthru.a 00:02:57.646 LIB libspdk_blobfs_bdev.a 00:02:57.646 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:57.646 LIB libspdk_bdev_gpt.a 00:02:57.646 CC module/bdev/nvme/nvme_rpc.o 00:02:57.646 LIB libspdk_bdev_delay.a 00:02:57.646 CC module/bdev/nvme/bdev_mdns_client.o 00:02:57.646 CC module/bdev/raid/bdev_raid.o 00:02:57.646 LIB libspdk_bdev_error.a 00:02:57.646 LIB libspdk_bdev_malloc.a 00:02:57.646 LIB libspdk_bdev_null.a 00:02:57.646 CC module/bdev/raid/bdev_raid_rpc.o 00:02:57.646 CC module/bdev/raid/bdev_raid_sb.o 00:02:57.646 CC module/bdev/split/vbdev_split.o 00:02:57.646 CC module/bdev/raid/raid0.o 00:02:57.903 LIB libspdk_bdev_lvol.a 00:02:57.903 CC module/bdev/split/vbdev_split_rpc.o 00:02:57.903 CC module/bdev/raid/raid1.o 00:02:57.903 CC module/bdev/raid/concat.o 00:02:57.903 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:57.903 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:57.903 LIB libspdk_bdev_split.a 00:02:57.903 CC module/bdev/aio/bdev_aio.o 00:02:57.903 CC module/bdev/aio/bdev_aio_rpc.o 00:02:57.903 LIB libspdk_bdev_nvme.a 00:02:57.903 LIB libspdk_bdev_raid.a 00:02:57.903 LIB libspdk_bdev_zone_block.a 00:02:58.161 LIB libspdk_bdev_aio.a 00:02:58.161 CC module/event/subsystems/vmd/vmd.o 00:02:58.161 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:58.161 CC module/event/subsystems/iobuf/iobuf.o 00:02:58.161 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:58.161 CC module/event/subsystems/sock/sock.o 00:02:58.161 CC module/event/subsystems/keyring/keyring.o 00:02:58.161 CC module/event/subsystems/scheduler/scheduler.o 00:02:58.419 LIB libspdk_event_vmd.a 00:02:58.419 LIB libspdk_event_keyring.a 00:02:58.419 LIB libspdk_event_sock.a 00:02:58.419 LIB libspdk_event_scheduler.a 00:02:58.419 LIB libspdk_event_iobuf.a 00:02:58.419 CC module/event/subsystems/accel/accel.o 00:02:58.678 LIB libspdk_event_accel.a 00:02:58.678 CC module/event/subsystems/bdev/bdev.o 00:02:58.678 LIB libspdk_event_bdev.a 00:02:58.935 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:02:58.935 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:58.935 CC module/event/subsystems/scsi/scsi.o 00:02:58.935 LIB libspdk_event_scsi.a 00:02:58.935 LIB libspdk_event_nvmf.a 00:02:59.193 CC module/event/subsystems/iscsi/iscsi.o 00:02:59.193 LIB libspdk_event_iscsi.a 00:02:59.451 CC app/trace_record/trace_record.o 00:02:59.451 CXX app/trace/trace.o 00:02:59.451 CC app/spdk_nvme_perf/perf.o 00:02:59.451 CC app/spdk_lspci/spdk_lspci.o 00:02:59.451 CC examples/ioat/perf/perf.o 00:02:59.451 CC examples/util/zipf/zipf.o 00:02:59.451 CC app/spdk_tgt/spdk_tgt.o 00:02:59.451 CC app/nvmf_tgt/nvmf_main.o 00:02:59.451 CC app/iscsi_tgt/iscsi_tgt.o 00:02:59.451 CC test/thread/poller_perf/poller_perf.o 00:02:59.451 LINK spdk_trace_record 00:02:59.451 LINK zipf 00:02:59.451 LINK spdk_lspci 00:02:59.451 LINK ioat_perf 00:02:59.451 LINK nvmf_tgt 00:02:59.451 LINK poller_perf 00:02:59.451 LINK spdk_tgt 00:02:59.451 CC examples/ioat/verify/verify.o 00:02:59.451 CC test/thread/lock/spdk_lock.o 00:02:59.451 LINK iscsi_tgt 00:02:59.451 CC app/spdk_nvme_identify/identify.o 00:02:59.710 LINK spdk_nvme_perf 00:02:59.710 LINK verify 00:02:59.710 CC examples/thread/thread/thread_ex.o 00:02:59.710 CC test/dma/test_dma/test_dma.o 00:02:59.710 CC examples/sock/hello_world/hello_sock.o 00:02:59.710 TEST_HEADER include/spdk/accel.h 00:02:59.710 TEST_HEADER include/spdk/accel_module.h 00:02:59.710 TEST_HEADER include/spdk/assert.h 00:02:59.710 TEST_HEADER include/spdk/barrier.h 00:02:59.710 CC test/app/bdev_svc/bdev_svc.o 00:02:59.710 TEST_HEADER include/spdk/base64.h 00:02:59.710 TEST_HEADER include/spdk/bdev.h 00:02:59.710 TEST_HEADER include/spdk/bdev_module.h 00:02:59.710 TEST_HEADER include/spdk/bdev_zone.h 00:02:59.710 TEST_HEADER include/spdk/bit_array.h 00:02:59.710 TEST_HEADER include/spdk/bit_pool.h 00:02:59.710 CC examples/idxd/perf/perf.o 00:02:59.710 TEST_HEADER include/spdk/blob.h 00:02:59.710 TEST_HEADER include/spdk/blob_bdev.h 00:02:59.710 TEST_HEADER include/spdk/blobfs.h 00:02:59.710 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:59.710 CC examples/vmd/lsvmd/lsvmd.o 00:02:59.710 TEST_HEADER include/spdk/conf.h 00:02:59.710 TEST_HEADER include/spdk/config.h 00:02:59.710 TEST_HEADER include/spdk/cpuset.h 00:02:59.710 TEST_HEADER include/spdk/crc16.h 00:02:59.710 TEST_HEADER include/spdk/crc32.h 00:02:59.710 TEST_HEADER include/spdk/crc64.h 00:02:59.710 TEST_HEADER include/spdk/dif.h 00:02:59.710 TEST_HEADER include/spdk/dma.h 00:02:59.710 TEST_HEADER include/spdk/endian.h 00:02:59.710 TEST_HEADER include/spdk/env.h 00:02:59.710 TEST_HEADER include/spdk/env_dpdk.h 00:02:59.710 TEST_HEADER include/spdk/event.h 00:02:59.710 TEST_HEADER include/spdk/fd.h 00:02:59.710 TEST_HEADER include/spdk/fd_group.h 00:02:59.710 TEST_HEADER include/spdk/file.h 00:02:59.710 TEST_HEADER include/spdk/ftl.h 00:02:59.710 TEST_HEADER include/spdk/gpt_spec.h 00:02:59.710 LINK thread 00:02:59.710 TEST_HEADER include/spdk/hexlify.h 00:02:59.710 TEST_HEADER include/spdk/histogram_data.h 00:02:59.710 TEST_HEADER include/spdk/idxd.h 00:02:59.710 TEST_HEADER include/spdk/idxd_spec.h 00:02:59.710 TEST_HEADER include/spdk/init.h 00:02:59.710 TEST_HEADER include/spdk/ioat.h 00:02:59.710 TEST_HEADER include/spdk/ioat_spec.h 00:02:59.710 TEST_HEADER include/spdk/iscsi_spec.h 00:02:59.710 TEST_HEADER include/spdk/json.h 00:02:59.710 TEST_HEADER include/spdk/jsonrpc.h 00:02:59.710 TEST_HEADER include/spdk/keyring.h 00:02:59.710 TEST_HEADER include/spdk/keyring_module.h 00:02:59.710 TEST_HEADER 
include/spdk/likely.h 00:02:59.710 TEST_HEADER include/spdk/log.h 00:02:59.710 TEST_HEADER include/spdk/lvol.h 00:02:59.710 LINK spdk_nvme_identify 00:02:59.710 TEST_HEADER include/spdk/memory.h 00:02:59.710 TEST_HEADER include/spdk/mmio.h 00:02:59.710 TEST_HEADER include/spdk/nbd.h 00:02:59.710 TEST_HEADER include/spdk/notify.h 00:02:59.710 TEST_HEADER include/spdk/nvme.h 00:02:59.710 TEST_HEADER include/spdk/nvme_intel.h 00:02:59.710 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:59.710 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:59.710 TEST_HEADER include/spdk/nvme_spec.h 00:02:59.710 TEST_HEADER include/spdk/nvme_zns.h 00:02:59.969 TEST_HEADER include/spdk/nvmf.h 00:02:59.969 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:59.969 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:59.969 TEST_HEADER include/spdk/nvmf_spec.h 00:02:59.969 TEST_HEADER include/spdk/nvmf_transport.h 00:02:59.969 TEST_HEADER include/spdk/opal.h 00:02:59.969 LINK hello_sock 00:02:59.969 TEST_HEADER include/spdk/opal_spec.h 00:02:59.969 TEST_HEADER include/spdk/pci_ids.h 00:02:59.969 TEST_HEADER include/spdk/pipe.h 00:02:59.969 TEST_HEADER include/spdk/queue.h 00:02:59.969 TEST_HEADER include/spdk/reduce.h 00:02:59.969 TEST_HEADER include/spdk/rpc.h 00:02:59.969 TEST_HEADER include/spdk/scheduler.h 00:02:59.969 TEST_HEADER include/spdk/scsi.h 00:02:59.969 TEST_HEADER include/spdk/scsi_spec.h 00:02:59.969 TEST_HEADER include/spdk/sock.h 00:02:59.969 TEST_HEADER include/spdk/stdinc.h 00:02:59.969 TEST_HEADER include/spdk/string.h 00:02:59.969 TEST_HEADER include/spdk/thread.h 00:02:59.969 TEST_HEADER include/spdk/trace.h 00:02:59.969 TEST_HEADER include/spdk/trace_parser.h 00:02:59.969 TEST_HEADER include/spdk/tree.h 00:02:59.969 TEST_HEADER include/spdk/ublk.h 00:02:59.969 TEST_HEADER include/spdk/util.h 00:02:59.969 LINK bdev_svc 00:02:59.969 TEST_HEADER include/spdk/uuid.h 00:02:59.969 LINK lsvmd 00:02:59.969 TEST_HEADER include/spdk/version.h 00:02:59.969 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:59.969 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:59.969 TEST_HEADER include/spdk/vhost.h 00:02:59.969 TEST_HEADER include/spdk/vmd.h 00:02:59.969 LINK test_dma 00:02:59.969 TEST_HEADER include/spdk/xor.h 00:02:59.969 TEST_HEADER include/spdk/zipf.h 00:02:59.969 CXX test/cpp_headers/accel.o 00:02:59.969 LINK spdk_lock 00:02:59.969 LINK idxd_perf 00:02:59.969 CC app/spdk_nvme_discover/discovery_aer.o 00:02:59.969 CC test/app/histogram_perf/histogram_perf.o 00:02:59.969 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:59.969 CXX test/cpp_headers/accel_module.o 00:02:59.969 CC test/app/jsoncat/jsoncat.o 00:02:59.969 CC examples/vmd/led/led.o 00:02:59.969 CC app/spdk_top/spdk_top.o 00:02:59.969 LINK histogram_perf 00:02:59.969 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:59.969 LINK spdk_nvme_discover 00:02:59.969 LINK led 00:02:59.969 LINK jsoncat 00:03:00.227 CXX test/cpp_headers/assert.o 00:03:00.227 CC test/env/mem_callbacks/mem_callbacks.o 00:03:00.227 CC examples/accel/perf/accel_perf.o 00:03:00.227 CC test/env/vtophys/vtophys.o 00:03:00.227 LINK nvme_fuzz 00:03:00.227 CC test/event/event_perf/event_perf.o 00:03:00.227 LINK spdk_trace 00:03:00.227 CXX test/cpp_headers/barrier.o 00:03:00.227 LINK vtophys 00:03:00.227 CC test/nvme/aer/aer.o 00:03:00.227 LINK event_perf 00:03:00.227 CC test/event/reactor/reactor.o 00:03:00.227 LINK accel_perf 00:03:00.485 LINK spdk_top 00:03:00.485 CC test/nvme/reset/reset.o 00:03:00.485 LINK reactor 00:03:00.485 LINK aer 00:03:00.485 CC examples/blob/hello_world/hello_blob.o 
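The TEST_HEADER include/spdk/*.h and CXX test/cpp_headers/*.o entries above appear to be per-header compile checks: for each public SPDK header the build compiles a tiny stub that includes only that header, so a header that is not self-contained fails here rather than in some application build. A minimal sketch of what such a stub presumably looks like, using the real include/spdk/accel.h header but an assumed stub layout:

/* Hypothetical stub corresponding to the "CXX test/cpp_headers/accel.o" entry:
 * it includes exactly one public header and nothing else, so compilation fails
 * if that header does not pull in its own dependencies. The CXX prefix
 * suggests these stubs are compiled with the C++ compiler, which also
 * exercises the extern "C" guards; the same stub is valid C. */
#include <spdk/accel.h>

int main(void)
{
        return 0;
}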
00:03:00.485 CC app/fio/nvme/fio_plugin.o 00:03:00.485 CXX test/cpp_headers/base64.o 00:03:00.485 CC test/event/reactor_perf/reactor_perf.o 00:03:00.485 CXX test/cpp_headers/bdev.o 00:03:00.485 CC test/nvme/sgl/sgl.o 00:03:00.485 LINK iscsi_fuzz 00:03:00.485 CC examples/blob/cli/blobcli.o 00:03:00.485 LINK reset 00:03:00.485 LINK reactor_perf 00:03:00.485 LINK hello_blob 00:03:00.485 CC test/app/stub/stub.o 00:03:00.485 LINK sgl 00:03:00.485 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:00.743 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:00.743 struct spdk_nvme_fdp_ruhs ruhs; 00:03:00.743 ^ 00:03:00.743 CC examples/nvme/hello_world/hello_world.o 00:03:00.743 CXX test/cpp_headers/bdev_module.o 00:03:00.743 CC test/rpc_client/rpc_client_test.o 00:03:00.743 LINK blobcli 00:03:00.743 LINK env_dpdk_post_init 00:03:00.743 1 CC examples/nvme/reconnect/reconnect.o 00:03:00.743 warning generated. 00:03:00.743 CC test/nvme/e2edp/nvme_dp.o 00:03:00.743 LINK mem_callbacks 00:03:00.743 LINK spdk_nvme 00:03:00.743 LINK stub 00:03:00.743 LINK rpc_client_test 00:03:00.743 LINK hello_world 00:03:00.743 CXX test/cpp_headers/bdev_zone.o 00:03:00.743 CC test/env/memory/memory_ut.o 00:03:00.743 CC app/fio/bdev/fio_plugin.o 00:03:00.743 CXX test/cpp_headers/bit_array.o 00:03:00.743 LINK nvme_dp 00:03:00.743 CC test/nvme/overhead/overhead.o 00:03:00.743 LINK reconnect 00:03:00.743 CC test/nvme/err_injection/err_injection.o 00:03:01.001 CC examples/bdev/hello_world/hello_bdev.o 00:03:01.001 CC test/env/pci/pci_ut.o 00:03:01.001 LINK overhead 00:03:01.001 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:01.001 LINK err_injection 00:03:01.001 CXX test/cpp_headers/bit_pool.o 00:03:01.001 CC examples/nvme/arbitration/arbitration.o 00:03:01.001 LINK hello_bdev 00:03:01.001 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:01.001 LINK pci_ut 00:03:01.001 LINK spdk_bdev 00:03:01.001 CC test/nvme/startup/startup.o 00:03:01.001 CC test/accel/dif/dif.o 00:03:01.001 LINK histogram_ut 00:03:01.001 CC examples/nvme/hotplug/hotplug.o 00:03:01.259 CXX test/cpp_headers/blob.o 00:03:01.259 LINK nvme_manage 00:03:01.259 LINK arbitration 00:03:01.259 LINK startup 00:03:01.259 CC test/nvme/reserve/reserve.o 00:03:01.259 CC examples/bdev/bdevperf/bdevperf.o 00:03:01.259 LINK hotplug 00:03:01.259 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:01.259 LINK reserve 00:03:01.259 LINK dif 00:03:01.259 CC test/unit/lib/log/log.c/log_ut.o 00:03:01.259 CC test/blobfs/mkfs/mkfs.o 00:03:01.259 CXX test/cpp_headers/blob_bdev.o 00:03:01.259 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:01.259 gmake[2]: Nothing to be done for 'all'. 
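The clang diagnostic above (-Wgnu-variable-sized-type-not-at-end) is emitted when a struct whose last member is a flexible array member, and whose size is therefore not fixed, is embedded anywhere other than at the end of an enclosing struct. A minimal stand-alone reproduction with hypothetical names (not the actual spdk_nvme_fdp_ruhs definition used by fio_plugin.c):

/* ruh_status ends with a flexible array member, so its size is not fixed. */
struct ruh_status {
        unsigned short nruhsd;     /* count of descriptors that follow */
        unsigned short ruhsd[];    /* flexible array member */
};

/* Embedding a variably sized struct as anything but the last field is a GNU
 * extension; clang reports the same "field ... with variable sized type ...
 * not at the end of a struct or class" warning seen above. */
struct ruhs_holder {
        struct ruh_status ruhs;    /* not the last member -> warning */
        unsigned char raw[512];    /* space reserved for the descriptors */
};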
00:03:01.259 LINK cmb_copy 00:03:01.259 CC test/nvme/simple_copy/simple_copy.o 00:03:01.259 CC examples/nvme/abort/abort.o 00:03:01.517 CC test/nvme/connect_stress/connect_stress.o 00:03:01.517 LINK bdevperf 00:03:01.517 LINK mkfs 00:03:01.517 LINK log_ut 00:03:01.517 LINK memory_ut 00:03:01.517 LINK simple_copy 00:03:01.517 LINK connect_stress 00:03:01.517 CC test/nvme/boot_partition/boot_partition.o 00:03:01.517 LINK abort 00:03:01.517 CXX test/cpp_headers/blobfs.o 00:03:01.517 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:01.517 CC test/bdev/bdevio/bdevio.o 00:03:01.517 LINK common_ut 00:03:01.517 CC test/nvme/compliance/nvme_compliance.o 00:03:01.517 CC test/nvme/fused_ordering/fused_ordering.o 00:03:01.517 CXX test/cpp_headers/blobfs_bdev.o 00:03:01.517 CXX test/cpp_headers/conf.o 00:03:01.517 LINK boot_partition 00:03:01.517 LINK pmr_persistence 00:03:01.775 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:01.775 CXX test/cpp_headers/config.o 00:03:01.775 LINK fused_ordering 00:03:01.775 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:01.775 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:01.775 LINK base64_ut 00:03:01.775 CXX test/cpp_headers/cpuset.o 00:03:01.775 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:01.775 LINK bdevio 00:03:01.775 LINK nvme_compliance 00:03:01.775 CC examples/nvmf/nvmf/nvmf.o 00:03:01.775 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:01.775 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:01.775 CXX test/cpp_headers/crc16.o 00:03:01.775 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:01.775 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:02.034 LINK cpuset_ut 00:03:02.034 LINK doorbell_aers 00:03:02.034 LINK bit_array_ut 00:03:02.034 CC test/nvme/fdp/fdp.o 00:03:02.034 CXX test/cpp_headers/crc32.o 00:03:02.034 LINK crc32_ieee_ut 00:03:02.034 LINK crc16_ut 00:03:02.034 LINK ioat_ut 00:03:02.034 LINK nvmf 00:03:02.034 CXX test/cpp_headers/crc64.o 00:03:02.034 CXX test/cpp_headers/dif.o 00:03:02.034 LINK dma_ut 00:03:02.034 CXX test/cpp_headers/dma.o 00:03:02.034 CXX test/cpp_headers/endian.o 00:03:02.034 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:02.034 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:02.034 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:02.034 LINK fdp 00:03:02.034 LINK crc32c_ut 00:03:02.034 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:02.034 LINK crc64_ut 00:03:02.034 CXX test/cpp_headers/env.o 00:03:02.034 CXX test/cpp_headers/env_dpdk.o 00:03:02.293 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:02.293 CC test/unit/lib/util/math.c/math_ut.o 00:03:02.293 CXX test/cpp_headers/event.o 00:03:02.293 CC test/unit/lib/util/string.c/string_ut.o 00:03:02.293 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:02.293 CXX test/cpp_headers/fd.o 00:03:02.293 LINK iov_ut 00:03:02.293 CXX test/cpp_headers/fd_group.o 00:03:02.293 LINK math_ut 00:03:02.293 CXX test/cpp_headers/file.o 00:03:02.293 CXX test/cpp_headers/ftl.o 00:03:02.293 LINK string_ut 00:03:02.293 CXX test/cpp_headers/gpt_spec.o 00:03:02.293 LINK xor_ut 00:03:02.293 CXX test/cpp_headers/hexlify.o 00:03:02.293 CXX test/cpp_headers/histogram_data.o 00:03:02.293 CXX test/cpp_headers/idxd.o 00:03:02.293 CXX test/cpp_headers/idxd_spec.o 00:03:02.293 CXX test/cpp_headers/init.o 00:03:02.293 LINK dif_ut 00:03:02.293 LINK pipe_ut 00:03:02.574 CXX test/cpp_headers/ioat.o 00:03:02.574 CXX test/cpp_headers/ioat_spec.o 00:03:02.574 CXX test/cpp_headers/iscsi_spec.o 00:03:02.574 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:02.574 CC test/unit/lib/json/json_write.c/json_write_ut.o 
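The CC test/unit/lib/.../*_ut.o entries above and below are SPDK's per-library unit tests, which are built on the CUnit framework. A minimal sketch of the general shape of such a test program, with hypothetical suite and test names and no SPDK dependencies:

#include <CUnit/Basic.h>

/* Hypothetical test case; the real *_ut.c files exercise one SPDK library
 * (util, json, nvme, ...) per program. */
static void
test_addition(void)
{
        CU_ASSERT_EQUAL(2 + 2, 4);
}

int
main(void)
{
        CU_pSuite suite;
        unsigned int failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
                return CU_get_error();
        }

        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL || CU_add_test(suite, "addition", test_addition) == NULL) {
                CU_cleanup_registry();
                return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);
        CU_basic_run_tests();
        failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return failures > 0 ? 1 : 0;
}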
00:03:02.574 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:02.574 CXX test/cpp_headers/json.o 00:03:02.574 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:02.574 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:02.574 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:02.574 CXX test/cpp_headers/jsonrpc.o 00:03:02.574 CXX test/cpp_headers/keyring.o 00:03:02.574 CXX test/cpp_headers/keyring_module.o 00:03:02.574 LINK pci_event_ut 00:03:02.574 CXX test/cpp_headers/likely.o 00:03:02.833 CXX test/cpp_headers/log.o 00:03:02.833 LINK json_util_ut 00:03:02.833 CXX test/cpp_headers/lvol.o 00:03:02.833 LINK idxd_user_ut 00:03:02.833 CXX test/cpp_headers/memory.o 00:03:02.833 CXX test/cpp_headers/mmio.o 00:03:02.833 CXX test/cpp_headers/nbd.o 00:03:02.833 CXX test/cpp_headers/notify.o 00:03:02.833 CXX test/cpp_headers/nvme.o 00:03:02.833 LINK idxd_ut 00:03:02.833 CXX test/cpp_headers/nvme_intel.o 00:03:02.833 CXX test/cpp_headers/nvme_ocssd.o 00:03:02.833 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:02.833 CXX test/cpp_headers/nvme_spec.o 00:03:02.833 CXX test/cpp_headers/nvme_zns.o 00:03:02.833 LINK json_write_ut 00:03:02.833 CXX test/cpp_headers/nvmf.o 00:03:02.833 CXX test/cpp_headers/nvmf_cmd.o 00:03:02.833 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:03.091 CXX test/cpp_headers/nvmf_spec.o 00:03:03.091 CXX test/cpp_headers/nvmf_transport.o 00:03:03.091 CXX test/cpp_headers/opal.o 00:03:03.091 LINK json_parse_ut 00:03:03.091 CXX test/cpp_headers/opal_spec.o 00:03:03.091 CXX test/cpp_headers/pci_ids.o 00:03:03.091 CXX test/cpp_headers/pipe.o 00:03:03.091 CXX test/cpp_headers/queue.o 00:03:03.091 CXX test/cpp_headers/reduce.o 00:03:03.091 CXX test/cpp_headers/rpc.o 00:03:03.091 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:03.091 CXX test/cpp_headers/scheduler.o 00:03:03.091 CXX test/cpp_headers/scsi.o 00:03:03.091 CXX test/cpp_headers/scsi_spec.o 00:03:03.091 CXX test/cpp_headers/sock.o 00:03:03.091 CXX test/cpp_headers/stdinc.o 00:03:03.091 CXX test/cpp_headers/string.o 00:03:03.091 CXX test/cpp_headers/thread.o 00:03:03.350 LINK jsonrpc_server_ut 00:03:03.350 CXX test/cpp_headers/trace.o 00:03:03.350 CXX test/cpp_headers/trace_parser.o 00:03:03.350 CXX test/cpp_headers/tree.o 00:03:03.350 CXX test/cpp_headers/ublk.o 00:03:03.350 CXX test/cpp_headers/util.o 00:03:03.350 CXX test/cpp_headers/uuid.o 00:03:03.350 CXX test/cpp_headers/version.o 00:03:03.350 CXX test/cpp_headers/vfio_user_pci.o 00:03:03.350 CXX test/cpp_headers/vfio_user_spec.o 00:03:03.350 CXX test/cpp_headers/vhost.o 00:03:03.350 CXX test/cpp_headers/vmd.o 00:03:03.350 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:03.350 CXX test/cpp_headers/xor.o 00:03:03.350 CXX test/cpp_headers/zipf.o 00:03:03.608 LINK rpc_ut 00:03:03.866 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:03.866 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:03.866 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:03.866 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:03.866 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:03.867 CC test/unit/lib/notify/notify.c/notify_ut.o 00:03:03.867 LINK keyring_ut 00:03:04.125 LINK iobuf_ut 00:03:04.125 LINK notify_ut 00:03:04.125 LINK posix_ut 00:03:04.125 LINK thread_ut 00:03:04.383 LINK sock_ut 00:03:04.383 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:04.383 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:04.383 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:04.383 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:04.383 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:04.383 CC 
test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:04.383 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:04.383 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:04.383 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:04.383 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:04.642 LINK rpc_ut 00:03:04.642 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:04.642 LINK subsystem_ut 00:03:04.642 LINK blob_bdev_ut 00:03:04.642 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:04.899 CC test/unit/lib/event/app.c/app_ut.o 00:03:04.899 LINK accel_ut 00:03:05.157 LINK app_ut 00:03:05.157 LINK nvme_ctrlr_cmd_ut 00:03:05.158 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:05.158 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:05.158 LINK nvme_ns_ut 00:03:05.158 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:05.158 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:05.158 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:05.158 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:05.158 LINK nvme_ut 00:03:05.416 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:05.416 LINK reactor_ut 00:03:05.416 LINK nvme_ns_ocssd_cmd_ut 00:03:05.416 LINK nvme_ns_cmd_ut 00:03:05.416 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:05.674 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:05.674 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:05.674 LINK nvme_ctrlr_ut 00:03:05.674 LINK scsi_nvme_ut 00:03:05.674 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:05.674 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:05.674 LINK nvme_poll_group_ut 00:03:05.932 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:05.932 LINK nvme_qpair_ut 00:03:05.932 LINK gpt_ut 00:03:05.932 LINK nvme_pcie_ut 00:03:05.932 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:05.932 LINK blob_ut 00:03:05.932 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:05.932 LINK nvme_quirks_ut 00:03:06.189 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:06.189 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:06.189 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:06.189 LINK part_ut 00:03:06.189 LINK tree_ut 00:03:06.189 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:06.189 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:06.189 LINK bdev_ut 00:03:06.447 LINK vbdev_lvol_ut 00:03:06.447 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:06.447 LINK nvme_transport_ut 00:03:06.447 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:06.447 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:06.706 LINK bdev_raid_sb_ut 00:03:06.706 LINK bdev_raid_ut 00:03:06.706 LINK blobfs_async_ut 00:03:06.706 LINK nvme_io_msg_ut 00:03:06.706 LINK nvme_tcp_ut 00:03:06.706 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:06.706 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:06.706 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:06.706 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:06.706 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:06.706 LINK nvme_pcie_common_ut 00:03:06.706 LINK nvme_opal_ut 00:03:06.706 LINK bdev_zone_ut 00:03:06.963 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 00:03:06.963 LINK bdev_ut 00:03:06.963 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:06.963 LINK nvme_fabric_ut 00:03:06.963 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:06.963 LINK concat_ut 00:03:06.963 CC 
test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:06.963 LINK raid1_ut 00:03:06.963 LINK blobfs_bdev_ut 00:03:06.963 LINK blobfs_sync_ut 00:03:06.963 LINK vbdev_zone_block_ut 00:03:07.221 LINK lvol_ut 00:03:07.221 LINK raid0_ut 00:03:07.786 LINK nvme_rdma_ut 00:03:08.045 LINK bdev_nvme_ut 00:03:08.302 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:08.302 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:08.303 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:08.303 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:08.303 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:08.303 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:08.303 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:08.303 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:08.303 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:08.303 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:08.303 LINK dev_ut 00:03:08.560 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:08.560 LINK ctrlr_bdev_ut 00:03:08.560 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:08.817 LINK nvmf_ut 00:03:08.817 LINK auth_ut 00:03:08.817 LINK ctrlr_discovery_ut 00:03:08.817 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:08.817 LINK lun_ut 00:03:08.817 LINK scsi_ut 00:03:08.817 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:08.817 LINK subsystem_ut 00:03:08.817 LINK transport_ut 00:03:09.075 LINK ctrlr_ut 00:03:09.075 LINK rdma_ut 00:03:09.075 LINK scsi_pr_ut 00:03:09.075 LINK scsi_bdev_ut 00:03:09.346 LINK tcp_ut 00:03:09.346 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:09.346 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:09.346 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:09.346 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:09.346 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:09.346 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:09.604 LINK init_grp_ut 00:03:09.604 LINK param_ut 00:03:09.604 LINK portal_grp_ut 00:03:09.604 LINK conn_ut 00:03:09.604 LINK tgt_node_ut 00:03:09.863 LINK iscsi_ut 00:03:10.121 00:03:10.121 real 1m4.450s 00:03:10.121 user 4m31.036s 00:03:10.121 sys 0m47.256s 00:03:10.121 17:22:05 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:10.121 17:22:05 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:10.121 ************************************ 00:03:10.121 END TEST unittest_build 00:03:10.121 ************************************ 00:03:10.121 17:22:05 -- common/autotest_common.sh@1142 -- $ return 0 00:03:10.121 17:22:05 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.121 17:22:05 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.121 17:22:05 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.121 17:22:05 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.121 17:22:05 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.121 17:22:05 -- pm/common@44 -- $ pid=1274 00:03:10.121 17:22:05 -- pm/common@50 -- $ kill -TERM 1274 00:03:10.121 17:22:05 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:10.121 17:22:05 -- nvmf/common.sh@7 -- # uname -s 00:03:10.121 17:22:05 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:10.121 17:22:05 -- nvmf/common.sh@7 -- # return 0 00:03:10.121 17:22:05 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.121 17:22:05 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.121 17:22:05 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:03:10.121 17:22:05 -- spdk/autotest.sh@53 -- # 
start_monitor_resources 00:03:10.121 17:22:05 -- pm/common@17 -- # local monitor 00:03:10.121 17:22:05 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.121 17:22:05 -- pm/common@25 -- # sleep 1 00:03:10.121 17:22:05 -- pm/common@21 -- # date +%s 00:03:10.121 17:22:05 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1721064125 00:03:10.121 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1721064125_collect-vmstat.pm.log 00:03:11.493 17:22:06 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.493 17:22:06 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.493 17:22:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:11.493 17:22:06 -- common/autotest_common.sh@10 -- # set +x 00:03:11.493 17:22:06 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.493 17:22:06 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:11.493 17:22:06 -- common/autotest_common.sh@10 -- # set +x 00:03:11.493 17:22:06 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:11.493 17:22:06 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:11.493 17:22:06 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:11.494 17:22:06 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:11.494 17:22:06 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:11.494 17:22:06 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.494 17:22:06 -- common/autotest_common.sh@1455 -- # uname 00:03:11.494 17:22:06 -- common/autotest_common.sh@1455 -- # '[' FreeBSD = FreeBSD ']' 00:03:11.494 17:22:06 -- common/autotest_common.sh@1456 -- # kldunload contigmem.ko 00:03:11.494 kldunload: can't find file contigmem.ko 00:03:11.494 17:22:06 -- common/autotest_common.sh@1456 -- # true 00:03:11.494 17:22:06 -- common/autotest_common.sh@1457 -- # '[' -n '' ']' 00:03:11.494 17:22:06 -- common/autotest_common.sh@1463 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:03:11.494 17:22:07 -- common/autotest_common.sh@1464 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:03:11.494 17:22:07 -- common/autotest_common.sh@1465 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:03:11.494 17:22:07 -- common/autotest_common.sh@1466 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:03:11.494 17:22:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:11.494 17:22:07 -- common/autotest_common.sh@1475 -- # uname 00:03:11.494 17:22:07 -- common/autotest_common.sh@1475 -- # [[ FreeBSD = FreeBSD ]] 00:03:11.494 17:22:07 -- common/autotest_common.sh@1475 -- # sysctl -n kern.ipc.maxsockbuf 00:03:11.494 17:22:07 -- common/autotest_common.sh@1475 -- # (( 2097152 < 4194304 )) 00:03:11.494 17:22:07 -- common/autotest_common.sh@1476 -- # sysctl kern.ipc.maxsockbuf=4194304 00:03:11.494 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:03:11.494 17:22:07 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:11.494 17:22:07 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:11.494 17:22:07 -- spdk/autotest.sh@72 -- # hash lcov 00:03:11.494 /home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:03:11.494 17:22:07 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:11.494 
17:22:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:11.494 17:22:07 -- common/autotest_common.sh@10 -- # set +x 00:03:11.494 17:22:07 -- spdk/autotest.sh@91 -- # rm -f 00:03:11.494 17:22:07 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:11.494 kldunload: can't find file contigmem.ko 00:03:11.494 kldunload: can't find file nic_uio.ko 00:03:11.494 17:22:07 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:11.494 17:22:07 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:11.494 17:22:07 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:11.494 17:22:07 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:11.494 17:22:07 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:11.494 17:22:07 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:11.494 17:22:07 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:11.494 17:22:07 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:03:11.494 17:22:07 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:03:11.494 17:22:07 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:03:11.494 nvme0ns1 is not a block device 00:03:11.494 17:22:07 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:03:11.494 /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:03:11.494 17:22:07 -- scripts/common.sh@391 -- # pt= 00:03:11.494 17:22:07 -- scripts/common.sh@392 -- # return 1 00:03:11.494 17:22:07 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:03:11.494 1+0 records in 00:03:11.494 1+0 records out 00:03:11.494 1048576 bytes transferred in 0.005141 secs (203978945 bytes/sec) 00:03:11.494 17:22:07 -- spdk/autotest.sh@118 -- # sync 00:03:12.060 17:22:07 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:12.060 17:22:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:12.060 17:22:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:12.626 17:22:08 -- spdk/autotest.sh@124 -- # uname -s 00:03:12.626 17:22:08 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:03:12.626 17:22:08 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:12.626 Contigmem (not present) 00:03:12.626 Buffer Size: not set 00:03:12.626 Num Buffers: not set 00:03:12.626 00:03:12.626 00:03:12.626 Type BDF Vendor Device Driver 00:03:12.626 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:03:12.626 17:22:08 -- spdk/autotest.sh@130 -- # uname -s 00:03:12.626 17:22:08 -- spdk/autotest.sh@130 -- # [[ FreeBSD == Linux ]] 00:03:12.626 17:22:08 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:12.626 17:22:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:12.626 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:03:12.884 17:22:08 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:12.884 17:22:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:12.884 17:22:08 -- common/autotest_common.sh@10 -- # set +x 00:03:12.884 17:22:08 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:12.884 kldunload: can't find file nic_uio.ko 00:03:12.884 hw.nic_uio.bdfs="0:16:0" 00:03:13.141 hw.contigmem.num_buffers="8" 00:03:13.141 hw.contigmem.buffer_size="268435456" 00:03:13.708 17:22:09 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:13.708 17:22:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:13.708 17:22:09 -- 
common/autotest_common.sh@10 -- # set +x 00:03:13.708 17:22:09 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:13.708 17:22:09 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:13.708 17:22:09 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:13.708 17:22:09 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:13.708 17:22:09 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:13.708 17:22:09 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:13.708 17:22:09 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:13.708 17:22:09 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:13.708 17:22:09 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:13.708 17:22:09 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:13.708 17:22:09 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:13.708 17:22:09 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:13.708 17:22:09 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:03:13.708 17:22:09 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:13.708 17:22:09 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:13.708 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:03:13.708 17:22:09 -- common/autotest_common.sh@1580 -- # device= 00:03:13.708 17:22:09 -- common/autotest_common.sh@1580 -- # true 00:03:13.708 17:22:09 -- common/autotest_common.sh@1581 -- # [[ '' == \0\x\0\a\5\4 ]] 00:03:13.708 17:22:09 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:13.708 17:22:09 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:13.708 17:22:09 -- common/autotest_common.sh@1593 -- # return 0 00:03:13.708 17:22:09 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:03:13.708 17:22:09 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:13.708 17:22:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.708 17:22:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.708 17:22:09 -- common/autotest_common.sh@10 -- # set +x 00:03:13.708 ************************************ 00:03:13.708 START TEST unittest 00:03:13.708 ************************************ 00:03:13.708 17:22:09 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:13.708 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:13.708 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit 00:03:13.708 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:03:13.708 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:13.708 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 
00:03:13.708 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:13.708 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:13.708 ++ rpc_py=rpc_cmd 00:03:13.708 ++ set -e 00:03:13.708 ++ shopt -s nullglob 00:03:13.708 ++ shopt -s extglob 00:03:13.708 ++ shopt -s inherit_errexit 00:03:13.708 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:03:13.708 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:13.708 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:13.708 +++ CONFIG_WPDK_DIR= 00:03:13.708 +++ CONFIG_ASAN=n 00:03:13.708 +++ CONFIG_VBDEV_COMPRESS=n 00:03:13.708 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:13.708 +++ CONFIG_USDT=n 00:03:13.708 +++ CONFIG_CUSTOMOCF=n 00:03:13.708 +++ CONFIG_PREFIX=/usr/local 00:03:13.708 +++ CONFIG_RBD=n 00:03:13.708 +++ CONFIG_LIBDIR= 00:03:13.708 +++ CONFIG_IDXD=y 00:03:13.708 +++ CONFIG_NVME_CUSE=n 00:03:13.708 +++ CONFIG_SMA=n 00:03:13.708 +++ CONFIG_VTUNE=n 00:03:13.708 +++ CONFIG_TSAN=n 00:03:13.708 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:13.708 +++ CONFIG_VFIO_USER_DIR= 00:03:13.708 +++ CONFIG_PGO_CAPTURE=n 00:03:13.708 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:13.708 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:13.708 +++ CONFIG_LTO=n 00:03:13.708 +++ CONFIG_ISCSI_INITIATOR=n 00:03:13.708 +++ CONFIG_CET=n 00:03:13.708 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:13.708 +++ CONFIG_OCF_PATH= 00:03:13.708 +++ CONFIG_RDMA_SET_TOS=y 00:03:13.708 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:13.708 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:13.708 +++ CONFIG_UBLK=n 00:03:13.708 +++ CONFIG_ISAL_CRYPTO=y 00:03:13.708 +++ CONFIG_OPENSSL_PATH= 00:03:13.708 +++ CONFIG_OCF=n 00:03:13.708 +++ CONFIG_FUSE=n 00:03:13.708 +++ CONFIG_VTUNE_DIR= 00:03:13.708 +++ CONFIG_FUZZER_LIB= 00:03:13.708 +++ CONFIG_FUZZER=n 00:03:13.708 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:13.708 +++ CONFIG_CRYPTO=n 00:03:13.708 +++ CONFIG_PGO_USE=n 00:03:13.708 +++ CONFIG_VHOST=n 00:03:13.708 +++ CONFIG_DAOS=n 00:03:13.708 +++ CONFIG_DPDK_INC_DIR= 00:03:13.708 +++ CONFIG_DAOS_DIR= 00:03:13.708 +++ CONFIG_UNIT_TESTS=y 00:03:13.708 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:13.708 +++ CONFIG_VIRTIO=n 00:03:13.708 +++ CONFIG_DPDK_UADK=n 00:03:13.708 +++ CONFIG_COVERAGE=n 00:03:13.708 +++ CONFIG_RDMA=y 00:03:13.708 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:13.708 +++ CONFIG_URING_PATH= 00:03:13.708 +++ CONFIG_XNVME=n 00:03:13.708 +++ CONFIG_VFIO_USER=n 00:03:13.708 +++ CONFIG_ARCH=native 00:03:13.708 +++ CONFIG_HAVE_EVP_MAC=y 00:03:13.708 +++ CONFIG_URING_ZNS=n 00:03:13.708 +++ CONFIG_WERROR=y 00:03:13.708 +++ CONFIG_HAVE_LIBBSD=n 00:03:13.708 +++ CONFIG_UBSAN=n 00:03:13.708 +++ CONFIG_IPSEC_MB_DIR= 00:03:13.708 +++ CONFIG_GOLANG=n 00:03:13.709 +++ CONFIG_ISAL=y 00:03:13.709 +++ CONFIG_IDXD_KERNEL=n 00:03:13.709 +++ CONFIG_DPDK_LIB_DIR= 00:03:13.709 +++ CONFIG_RDMA_PROV=verbs 00:03:13.709 +++ CONFIG_APPS=y 00:03:13.709 +++ CONFIG_SHARED=n 00:03:13.709 +++ CONFIG_HAVE_KEYUTILS=n 00:03:13.709 +++ CONFIG_FC_PATH= 00:03:13.709 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:13.709 +++ CONFIG_FC=n 00:03:13.709 +++ CONFIG_AVAHI=n 00:03:13.709 +++ CONFIG_FIO_PLUGIN=y 00:03:13.709 +++ CONFIG_RAID5F=n 00:03:13.709 +++ CONFIG_EXAMPLES=y 00:03:13.709 +++ CONFIG_TESTS=y 00:03:13.709 +++ CONFIG_CRYPTO_MLX5=n 00:03:13.709 +++ CONFIG_MAX_LCORES=128 00:03:13.709 +++ CONFIG_IPSEC_MB=n 00:03:13.709 +++ CONFIG_PGO_DIR= 00:03:13.709 +++ CONFIG_DEBUG=y 00:03:13.709 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:13.709 +++ CONFIG_CROSS_PREFIX= 00:03:13.709 
+++ CONFIG_URING=n 00:03:13.709 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:13.709 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:13.709 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:03:13.709 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:03:13.709 +++ _root=/home/vagrant/spdk_repo/spdk 00:03:13.709 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:03:13.709 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:03:13.709 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:03:13.709 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:13.709 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:13.709 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:13.709 +++ VHOST_APP=("$_app_dir/vhost") 00:03:13.709 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:13.709 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:13.709 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:13.709 +++ [[ #ifndef SPDK_CONFIG_H 00:03:13.709 #define SPDK_CONFIG_H 00:03:13.709 #define SPDK_CONFIG_APPS 1 00:03:13.709 #define SPDK_CONFIG_ARCH native 00:03:13.709 #undef SPDK_CONFIG_ASAN 00:03:13.709 #undef SPDK_CONFIG_AVAHI 00:03:13.709 #undef SPDK_CONFIG_CET 00:03:13.709 #undef SPDK_CONFIG_COVERAGE 00:03:13.709 #define SPDK_CONFIG_CROSS_PREFIX 00:03:13.709 #undef SPDK_CONFIG_CRYPTO 00:03:13.709 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:13.709 #undef SPDK_CONFIG_CUSTOMOCF 00:03:13.709 #undef SPDK_CONFIG_DAOS 00:03:13.709 #define SPDK_CONFIG_DAOS_DIR 00:03:13.709 #define SPDK_CONFIG_DEBUG 1 00:03:13.709 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:13.709 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:13.709 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:13.709 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:13.709 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:13.709 #undef SPDK_CONFIG_DPDK_UADK 00:03:13.709 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:13.709 #define SPDK_CONFIG_EXAMPLES 1 00:03:13.709 #undef SPDK_CONFIG_FC 00:03:13.709 #define SPDK_CONFIG_FC_PATH 00:03:13.709 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:13.709 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:13.709 #undef SPDK_CONFIG_FUSE 00:03:13.709 #undef SPDK_CONFIG_FUZZER 00:03:13.709 #define SPDK_CONFIG_FUZZER_LIB 00:03:13.709 #undef SPDK_CONFIG_GOLANG 00:03:13.709 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:13.709 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:03:13.709 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:13.709 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:03:13.709 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:13.709 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:13.709 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:13.709 #define SPDK_CONFIG_IDXD 1 00:03:13.709 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:13.709 #undef SPDK_CONFIG_IPSEC_MB 00:03:13.709 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:13.709 #define SPDK_CONFIG_ISAL 1 00:03:13.709 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:13.709 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:13.709 #define SPDK_CONFIG_LIBDIR 00:03:13.709 #undef SPDK_CONFIG_LTO 00:03:13.709 #define SPDK_CONFIG_MAX_LCORES 128 00:03:13.709 #undef SPDK_CONFIG_NVME_CUSE 00:03:13.709 #undef SPDK_CONFIG_OCF 00:03:13.709 #define SPDK_CONFIG_OCF_PATH 00:03:13.709 #define SPDK_CONFIG_OPENSSL_PATH 00:03:13.709 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:13.709 #define SPDK_CONFIG_PGO_DIR 00:03:13.709 #undef SPDK_CONFIG_PGO_USE 00:03:13.709 #define SPDK_CONFIG_PREFIX /usr/local 00:03:13.709 #undef SPDK_CONFIG_RAID5F 00:03:13.709 #undef SPDK_CONFIG_RBD 
00:03:13.709 #define SPDK_CONFIG_RDMA 1 00:03:13.709 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:13.709 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:13.709 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:13.709 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:13.709 #undef SPDK_CONFIG_SHARED 00:03:13.709 #undef SPDK_CONFIG_SMA 00:03:13.709 #define SPDK_CONFIG_TESTS 1 00:03:13.709 #undef SPDK_CONFIG_TSAN 00:03:13.709 #undef SPDK_CONFIG_UBLK 00:03:13.709 #undef SPDK_CONFIG_UBSAN 00:03:13.709 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:13.709 #undef SPDK_CONFIG_URING 00:03:13.709 #define SPDK_CONFIG_URING_PATH 00:03:13.709 #undef SPDK_CONFIG_URING_ZNS 00:03:13.709 #undef SPDK_CONFIG_USDT 00:03:13.709 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:13.709 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:13.709 #undef SPDK_CONFIG_VFIO_USER 00:03:13.709 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:13.709 #undef SPDK_CONFIG_VHOST 00:03:13.709 #undef SPDK_CONFIG_VIRTIO 00:03:13.709 #undef SPDK_CONFIG_VTUNE 00:03:13.709 #define SPDK_CONFIG_VTUNE_DIR 00:03:13.709 #define SPDK_CONFIG_WERROR 1 00:03:13.709 #define SPDK_CONFIG_WPDK_DIR 00:03:13.709 #undef SPDK_CONFIG_XNVME 00:03:13.709 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:13.709 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:13.709 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:13.709 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:13.709 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:13.709 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:13.709 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:13.709 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:13.709 ++++ export PATH 00:03:13.709 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:13.709 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:13.709 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:13.709 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:13.709 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:13.709 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:13.709 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:03:13.709 +++ TEST_TAG=N/A 00:03:13.709 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:13.709 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:03:13.709 ++++ uname -s 00:03:13.709 +++ PM_OS=FreeBSD 00:03:13.709 +++ MONITOR_RESOURCES_SUDO=() 00:03:13.709 +++ declare -A MONITOR_RESOURCES_SUDO 00:03:13.709 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:03:13.709 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:03:13.709 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:03:13.709 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:03:13.709 +++ SUDO[0]= 00:03:13.709 +++ SUDO[1]='sudo -E' 00:03:13.709 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:03:13.709 +++ [[ FreeBSD == FreeBSD ]] 00:03:13.709 +++ MONITOR_RESOURCES=(collect-vmstat) 00:03:13.709 +++ [[ ! 
-d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:03:13.709 ++ : 0 00:03:13.709 ++ export RUN_NIGHTLY 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_RUN_VALGRIND 00:03:13.709 ++ : 1 00:03:13.709 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:13.709 ++ : 1 00:03:13.709 ++ export SPDK_TEST_UNITTEST 00:03:13.709 ++ : 00:03:13.709 ++ export SPDK_TEST_AUTOBUILD 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_RELEASE_BUILD 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_ISAL 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_ISCSI 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:13.709 ++ : 1 00:03:13.709 ++ export SPDK_TEST_NVME 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_NVME_PMR 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_NVME_BP 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_NVME_CLI 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_NVME_CUSE 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_NVME_FDP 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_NVMF 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_VFIOUSER 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_FUZZER 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_FUZZER_SHORT 00:03:13.709 ++ : rdma 00:03:13.709 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_RBD 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_VHOST 00:03:13.709 ++ : 1 00:03:13.709 ++ export SPDK_TEST_BLOCKDEV 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_IOAT 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_BLOBFS 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_VHOST_INIT 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_LVOL 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_RUN_ASAN 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_RUN_UBSAN 00:03:13.709 ++ : 00:03:13.709 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:13.709 ++ : 0 00:03:13.709 ++ export SPDK_RUN_NON_ROOT 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_CRYPTO 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_FTL 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_OCF 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_VMD 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_OPAL 00:03:13.710 ++ : 00:03:13.710 ++ export SPDK_TEST_NATIVE_DPDK 00:03:13.710 ++ : true 00:03:13.710 ++ export SPDK_AUTOTEST_X 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_RAID5 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_URING 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_USDT 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_USE_IGB_UIO 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_SCHEDULER 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_SCANBUILD 00:03:13.710 ++ : 00:03:13.710 ++ export SPDK_TEST_NVMF_NICS 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_SMA 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_DAOS 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_XNVME 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_ACCEL_DSA 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_ACCEL_IAA 00:03:13.710 ++ : 00:03:13.710 ++ export SPDK_TEST_FUZZER_TARGET 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_TEST_NVMF_MDNS 00:03:13.710 ++ : 0 00:03:13.710 ++ export SPDK_JSONRPC_GO_CLIENT 00:03:13.710 ++ export 
SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:13.710 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:13.710 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:13.710 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:13.710 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:13.710 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:13.710 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:13.710 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:13.710 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:13.710 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:13.710 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:13.710 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:13.710 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:13.710 ++ PYTHONDONTWRITEBYTECODE=1 00:03:13.710 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:13.710 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:13.710 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:13.710 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:13.710 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:13.710 ++ rm -rf /var/tmp/asan_suppression_file 00:03:13.710 ++ cat 00:03:13.710 ++ echo leak:libfuse3.so 00:03:13.710 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:13.710 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:13.710 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:13.710 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:13.710 ++ '[' -z /var/spdk/dependencies ']' 00:03:13.710 ++ export DEPENDENCY_DIR 00:03:13.710 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:13.710 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:13.710 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:13.710 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:13.710 ++ export QEMU_BIN= 00:03:13.710 ++ QEMU_BIN= 00:03:13.710 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:13.710 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:13.710 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:13.710 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:13.710 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:13.710 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:13.710 ++ '[' 0 -eq 0 ']' 00:03:13.710 ++ export valgrind= 00:03:13.710 ++ valgrind= 00:03:13.710 +++ uname -s 00:03:13.710 ++ '[' FreeBSD = Linux ']' 
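[Editor's note] The exports above make the freshly built SPDK, DPDK and libvfio-user shared libraries and the SPDK Python RPC client resolvable for every test binary started later in this log. A condensed equivalent, using the paths from this run:

    #!/usr/bin/env bash
    # Minimal reconstruction of the library/python path setup traced above.
    repo=/home/vagrant/spdk_repo/spdk
    export SPDK_LIB_DIR="$repo/build/lib"
    export DPDK_LIB_DIR="$repo/dpdk/build/lib"
    export VFIO_LIB_DIR="$repo/build/libvfio-user/usr/local/lib"
    export LD_LIBRARY_PATH="$SPDK_LIB_DIR:$DPDK_LIB_DIR:$VFIO_LIB_DIR:$LD_LIBRARY_PATH"
    export PYTHONPATH="$repo/python:$repo/test/rpc_plugins:$PYTHONPATH"
    export PYTHONDONTWRITEBYTECODE=1   # keep the repo free of .pyc files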
00:03:13.710 +++ uname -s 00:03:13.710 ++ '[' FreeBSD = FreeBSD ']' 00:03:13.710 ++ MAKE=gmake 00:03:13.710 +++ sysctl -a 00:03:13.710 +++ grep -E -i hw.ncpu 00:03:13.710 +++ awk '{print $2}' 00:03:13.969 ++ MAKEFLAGS=-j10 00:03:13.969 ++ HUGEMEM=2048 00:03:13.969 ++ export HUGEMEM=2048 00:03:13.969 ++ HUGEMEM=2048 00:03:13.969 ++ NO_HUGE=() 00:03:13.969 ++ TEST_MODE= 00:03:13.969 ++ [[ -z '' ]] 00:03:13.969 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:13.969 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:13.969 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:13.969 ++ exec 00:03:13.969 ++ set_test_storage 2147483648 00:03:13.969 ++ [[ -v testdir ]] 00:03:13.969 ++ local requested_size=2147483648 00:03:13.969 ++ local mount target_dir 00:03:13.969 ++ local -A mounts fss sizes avails uses 00:03:13.969 ++ local source fs size avail mount use 00:03:13.969 ++ local storage_fallback storage_candidates 00:03:13.969 +++ mktemp -udt spdk.XXXXXX 00:03:13.969 ++ storage_fallback=/tmp/spdk.XXXXXX.do5Q8UcbeI 00:03:13.969 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:13.969 ++ [[ -n '' ]] 00:03:13.969 ++ [[ -n '' ]] 00:03:13.969 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.do5Q8UcbeI/tests/unit /tmp/spdk.XXXXXX.do5Q8UcbeI 00:03:13.969 ++ requested_size=2214592512 00:03:13.969 ++ read -r source fs size use avail _ mount 00:03:13.969 +++ df -T 00:03:13.969 +++ grep -v Filesystem 00:03:13.969 ++ mounts["$mount"]=/dev/gptid/043e6f36-2a13-11ef-a525-001e676338ce 00:03:13.969 ++ fss["$mount"]=ufs 00:03:13.969 ++ avails["$mount"]=17237213184 00:03:13.969 ++ sizes["$mount"]=31182712832 00:03:13.969 ++ uses["$mount"]=11450884096 00:03:13.969 ++ read -r source fs size use avail _ mount 00:03:13.969 ++ mounts["$mount"]=devfs 00:03:13.969 ++ fss["$mount"]=devfs 00:03:13.969 ++ avails["$mount"]=1024 00:03:13.969 ++ sizes["$mount"]=1024 00:03:13.969 ++ uses["$mount"]=0 00:03:13.969 ++ read -r source fs size use avail _ mount 00:03:13.969 ++ mounts["$mount"]=tmpfs 00:03:13.969 ++ fss["$mount"]=tmpfs 00:03:13.969 ++ avails["$mount"]=2147442688 00:03:13.969 ++ sizes["$mount"]=2147483648 00:03:13.969 ++ uses["$mount"]=40960 00:03:13.969 ++ read -r source fs size use avail _ mount 00:03:13.969 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt/output 00:03:13.969 ++ fss["$mount"]=fusefs.sshfs 00:03:13.969 ++ avails["$mount"]=93549273088 00:03:13.969 ++ sizes["$mount"]=105088212992 00:03:13.969 ++ uses["$mount"]=6153506816 00:03:13.969 ++ read -r source fs size use avail _ mount 00:03:13.969 ++ printf '* Looking for test storage...\n' 00:03:13.969 * Looking for test storage... 
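[Editor's note] Before the test-storage lookup that continues below, the trace above selects gmake on FreeBSD, derives build parallelism from hw.ncpu (10 CPUs on this VM, hence MAKEFLAGS=-j10) and defaults HUGEMEM to 2048 MiB. A condensed sketch; the trace itself greps `sysctl -a`, and `sysctl -n` is just the shorter form:

    #!/usr/bin/env bash
    # FreeBSD-specific build defaults, condensed from the trace above.
    if [[ "$(uname -s)" == FreeBSD ]]; then
        MAKE=gmake
        MAKEFLAGS="-j$(sysctl -n hw.ncpu)"
    fi
    HUGEMEM=${HUGEMEM:-2048}   # MiB of contigmem/hugepage memory to reserve
    export HUGEMEM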
00:03:13.969 ++ local target_space new_size 00:03:13.969 ++ for target_dir in "${storage_candidates[@]}" 00:03:13.969 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:03:13.969 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:13.969 ++ mount=/ 00:03:13.969 ++ target_space=17237213184 00:03:13.969 ++ (( target_space == 0 || target_space < requested_size )) 00:03:13.969 ++ (( target_space >= requested_size )) 00:03:13.969 ++ [[ ufs == tmpfs ]] 00:03:13.969 ++ [[ ufs == ramfs ]] 00:03:13.969 ++ [[ / == / ]] 00:03:13.969 ++ new_size=13665476608 00:03:13.969 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:13.969 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:13.969 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:13.969 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:03:13.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:03:13.969 ++ return 0 00:03:13.969 ++ set -o errtrace 00:03:13.969 ++ shopt -s extdebug 00:03:13.969 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:13.969 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@1687 -- # true 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@29 -- # exec 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@18 -- # set -x 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@181 -- # hash lcov 00:03:13.969 /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@208 -- # uname -m 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:13.969 ************************************ 00:03:13.969 START TEST unittest_pci_event 00:03:13.969 ************************************ 00:03:13.969 17:22:09 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:13.969 00:03:13.969 
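[Editor's note] The set_test_storage block above asks df for every mount, then walks the candidate directories (the test dir, a mktemp fallback) and keeps the first one whose filesystem has at least the requested ~2.2 GB free; here the UFS root with ~17 GB available wins and becomes SPDK_TEST_STORAGE. A simplified sketch of that selection, omitting the tmpfs/ramfs resizing special cases handled by the real helper:

    #!/usr/bin/env bash
    # Simplified take on set_test_storage from autotest_common.sh (trace above).
    testdir=/home/vagrant/spdk_repo/spdk/test/unit      # as in this run
    storage_fallback=$(mktemp -udt spdk.XXXXXX)         # dry-run temp dir, as in the trace
    requested_size=$((2 * 1024 * 1024 * 1024 + 64 * 1024 * 1024))   # 2214592512 bytes
    for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
        # df -k reports available space in KiB in column 4; convert to bytes.
        avail=$(( $(df -k "$target_dir" 2>/dev/null | awk 'NR==2 {print $4}') * 1024 ))
        if (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done
    printf '* Found test storage at %s\n' "$SPDK_TEST_STORAGE"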
00:03:13.969 CUnit - A unit testing framework for C - Version 2.1-3 00:03:13.969 http://cunit.sourceforge.net/ 00:03:13.969 00:03:13.969 00:03:13.969 Suite: pci_event 00:03:13.969 Test: test_pci_parse_event ...passed 00:03:13.969 00:03:13.969 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.969 suites 1 1 n/a 0 0 00:03:13.969 tests 1 1 1 0 0 00:03:13.969 asserts 1 1 1 0 n/a 00:03:13.969 00:03:13.969 Elapsed time = 0.000 seconds 00:03:13.969 00:03:13.969 real 0m0.023s 00:03:13.969 user 0m0.005s 00:03:13.969 sys 0m0.009s 00:03:13.969 17:22:09 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.969 17:22:09 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:03:13.969 ************************************ 00:03:13.969 END TEST unittest_pci_event 00:03:13.969 ************************************ 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:13.969 17:22:09 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.969 17:22:09 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:13.969 ************************************ 00:03:13.969 START TEST unittest_include 00:03:13.969 ************************************ 00:03:13.969 17:22:09 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:13.970 00:03:13.970 00:03:13.970 CUnit - A unit testing framework for C - Version 2.1-3 00:03:13.970 http://cunit.sourceforge.net/ 00:03:13.970 00:03:13.970 00:03:13.970 Suite: histogram 00:03:13.970 Test: histogram_test ...passed 00:03:13.970 Test: histogram_merge ...passed 00:03:13.970 00:03:13.970 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.970 suites 1 1 n/a 0 0 00:03:13.970 tests 2 2 2 0 0 00:03:13.970 asserts 50 50 50 0 n/a 00:03:13.970 00:03:13.970 Elapsed time = 0.000 seconds 00:03:13.970 00:03:13.970 real 0m0.007s 00:03:13.970 user 0m0.001s 00:03:13.970 sys 0m0.007s 00:03:13.970 17:22:09 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:13.970 17:22:09 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:03:13.970 ************************************ 00:03:13.970 END TEST unittest_include 00:03:13.970 ************************************ 00:03:13.970 17:22:09 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:13.970 17:22:09 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:03:13.970 17:22:09 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:13.970 17:22:09 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:13.970 17:22:09 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:13.970 ************************************ 00:03:13.970 START TEST unittest_bdev 00:03:13.970 ************************************ 00:03:13.970 17:22:09 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:03:13.970 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:13.970 00:03:13.970 00:03:13.970 CUnit - A unit testing framework for C - Version 2.1-3 00:03:13.970 http://cunit.sourceforge.net/ 
00:03:13.970 00:03:13.970 00:03:13.970 Suite: bdev 00:03:13.970 Test: bytes_to_blocks_test ...passed 00:03:13.970 Test: num_blocks_test ...passed 00:03:13.970 Test: io_valid_test ...passed 00:03:13.970 Test: open_write_test ...[2024-07-15 17:22:09.726125] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.726402] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.726431] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:13.970 passed 00:03:13.970 Test: claim_test ...passed 00:03:13.970 Test: alias_add_del_test ...[2024-07-15 17:22:09.729940] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:13.970 [2024-07-15 17:22:09.729989] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:13.970 [2024-07-15 17:22:09.730010] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:13.970 passed 00:03:13.970 Test: get_device_stat_test ...passed 00:03:13.970 Test: bdev_io_types_test ...passed 00:03:13.970 Test: bdev_io_wait_test ...passed 00:03:13.970 Test: bdev_io_spans_split_test ...passed 00:03:13.970 Test: bdev_io_boundary_split_test ...passed 00:03:13.970 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-15 17:22:09.737122] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:13.970 passed 00:03:13.970 Test: bdev_io_mix_split_test ...passed 00:03:13.970 Test: bdev_io_split_with_io_wait ...passed 00:03:13.970 Test: bdev_io_write_unit_split_test ...[2024-07-15 17:22:09.742528] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:13.970 [2024-07-15 17:22:09.742595] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:13.970 [2024-07-15 17:22:09.742615] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:13.970 [2024-07-15 17:22:09.742635] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:13.970 passed 00:03:13.970 Test: bdev_io_alignment_with_boundary ...passed 00:03:13.970 Test: bdev_io_alignment ...passed 00:03:13.970 Test: bdev_histograms ...passed 00:03:13.970 Test: bdev_write_zeroes ...passed 00:03:13.970 Test: bdev_compare_and_write ...passed 00:03:13.970 Test: bdev_compare ...passed 00:03:13.970 Test: bdev_compare_emulated ...passed 00:03:13.970 Test: bdev_zcopy_write ...passed 00:03:13.970 Test: bdev_zcopy_read ...passed 00:03:13.970 Test: bdev_open_while_hotremove ...passed 00:03:13.970 Test: bdev_close_while_hotremove ...passed 00:03:13.970 Test: bdev_open_ext_test ...[2024-07-15 17:22:09.757112] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:13.970 passed 00:03:13.970 Test: bdev_open_ext_unregister ...passed 00:03:13.970 Test: bdev_set_io_timeout ...[2024-07-15 17:22:09.757156] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:13.970 passed 00:03:13.970 Test: bdev_set_qd_sampling ...passed 00:03:13.970 Test: lba_range_overlap ...passed 00:03:13.970 Test: lock_lba_range_check_ranges ...passed 00:03:13.970 Test: lock_lba_range_with_io_outstanding ...passed 00:03:13.970 Test: lock_lba_range_overlapped ...passed 00:03:13.970 Test: bdev_quiesce ...[2024-07-15 17:22:09.764018] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:03:13.970 passed 00:03:13.970 Test: bdev_io_abort ...passed 00:03:13.970 Test: bdev_unmap ...passed 00:03:13.970 Test: bdev_write_zeroes_split_test ...passed 00:03:13.970 Test: bdev_set_options_test ...passed[2024-07-15 17:22:09.768001] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:13.970 00:03:13.970 Test: bdev_get_memory_domains ...passed 00:03:13.970 Test: bdev_io_ext ...passed 00:03:13.970 Test: bdev_io_ext_no_opts ...passed 00:03:13.970 Test: bdev_io_ext_invalid_opts ...passed 00:03:13.970 Test: bdev_io_ext_split ...passed 00:03:13.970 Test: bdev_io_ext_bounce_buffer ...passed 00:03:13.970 Test: bdev_register_uuid_alias ...[2024-07-15 17:22:09.774868] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name c469f282-42ce-11ef-96ac-773515fba644 already exists 00:03:13.970 [2024-07-15 17:22:09.774925] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:c469f282-42ce-11ef-96ac-773515fba644 alias for bdev bdev0 00:03:13.970 passed 00:03:13.970 Test: bdev_unregister_by_name ...[2024-07-15 17:22:09.775251] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:13.970 [2024-07-15 17:22:09.775263] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7983:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:03:13.970 passed 00:03:13.970 Test: for_each_bdev_test ...passed 00:03:13.970 Test: bdev_seek_test ...passed 00:03:13.970 Test: bdev_copy ...passed 00:03:13.970 Test: bdev_copy_split_test ...passed 00:03:13.970 Test: examine_locks ...passed 00:03:13.970 Test: claim_v2_rwo ...passed 00:03:13.970 Test: claim_v2_rom ...[2024-07-15 17:22:09.779033] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779060] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779070] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779080] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779088] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779099] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8704:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:13.970 [2024-07-15 17:22:09.779127] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779137] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779146] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779154] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:13.970 passed 00:03:13.970 Test: claim_v2_rwm ...passed 00:03:13.970 Test: claim_v2_existing_writer ...passed 00:03:13.970 Test: claim_v2_existing_v1 ...passed 00:03:13.970 Test: claim_v1_existing_v2 ...passed 00:03:13.970 Test: examine_claimed ...passed[2024-07-15 17:22:09.779165] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:13.970 [2024-07-15 17:22:09.779175] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:13.970 [2024-07-15 17:22:09.779196] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8777:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:13.970 [2024-07-15 17:22:09.779207] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779216] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779225] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779233] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779242] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779253] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8777:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:13.970 [2024-07-15 17:22:09.779275] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:13.970 [2024-07-15 17:22:09.779283] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:13.970 [2024-07-15 17:22:09.779304] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779312] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:13.970 [2024-07-15 17:22:09.779321] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:13.971 [2024-07-15 17:22:09.779341] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:13.971 [2024-07-15 17:22:09.779351] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:13.971 [2024-07-15 17:22:09.779361] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:13.971 [2024-07-15 17:22:09.779399] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:13.971 00:03:13.971 00:03:13.971 Run Summary: Type Total Ran Passed Failed Inactive 00:03:13.971 suites 1 1 n/a 0 0 00:03:13.971 tests 59 59 59 0 0 00:03:13.971 asserts 4599 4599 4599 0 n/a 00:03:13.971 00:03:13.971 Elapsed time = 0.062 seconds 00:03:13.971 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:13.971 00:03:13.971 00:03:13.971 CUnit - A unit testing framework for C - Version 2.1-3 00:03:13.971 http://cunit.sourceforge.net/ 00:03:13.971 00:03:13.971 00:03:13.971 Suite: nvme 00:03:13.971 Test: test_create_ctrlr ...passed 00:03:13.971 Test: test_reset_ctrlr ...[2024-07-15 17:22:09.789546] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:13.971 passed 00:03:13.971 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:13.971 Test: test_failover_ctrlr ...passed 00:03:13.971 Test: test_race_between_failover_and_add_secondary_trid ...passed 00:03:13.971 Test: test_pending_reset ...[2024-07-15 17:22:09.790618] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.790684] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.790727] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 passed 00:03:13.971 Test: test_attach_ctrlr ...passed 00:03:13.971 Test: test_aer_cb ...passed 00:03:13.971 Test: test_submit_nvme_cmd ...passed 00:03:13.971 Test: test_add_remove_trid ...[2024-07-15 17:22:09.790958] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.791005] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.791125] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:13.971 passed 00:03:13.971 Test: test_abort ...passed 00:03:13.971 Test: test_get_io_qpair ...passed 00:03:13.971 Test: test_bdev_unregister ...passed 00:03:13.971 Test: test_compare_ns ...passed 00:03:13.971 Test: test_init_ana_log_page ...[2024-07-15 17:22:09.791542] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:03:13.971 passed 00:03:13.971 Test: test_get_memory_domains ...passed 00:03:13.971 Test: test_reconnect_qpair ...passed 00:03:13.971 Test: test_create_bdev_ctrlr ...passed 00:03:13.971 Test: test_add_multi_ns_to_bdev ...[2024-07-15 17:22:09.791954] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.792051] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:13.971 [2024-07-15 17:22:09.792283] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:13.971 passed 00:03:13.971 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:03:13.971 Test: test_admin_path ...passed 00:03:13.971 Test: test_reset_bdev_ctrlr ...passed 00:03:13.971 Test: test_find_io_path ...passed 00:03:13.971 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:13.971 Test: test_retry_io_for_io_path_error ...passed 00:03:13.971 Test: test_retry_io_count ...passed 00:03:13.971 Test: test_concurrent_read_ana_log_page ...passed 00:03:13.971 Test: test_retry_io_for_ana_error ...passed 00:03:13.971 Test: test_check_io_error_resiliency_params ...[2024-07-15 17:22:09.793252] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 
00:03:13.971 [2024-07-15 17:22:09.793282] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:13.971 [2024-07-15 17:22:09.793309] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:13.971 [2024-07-15 17:22:09.793333] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:13.971 [2024-07-15 17:22:09.793359] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:13.971 [2024-07-15 17:22:09.793379] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:13.971 passed 00:03:13.971 Test: test_retry_io_if_ctrlr_is_resetting ...[2024-07-15 17:22:09.793393] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:13.971 [2024-07-15 17:22:09.793407] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:03:13.971 [2024-07-15 17:22:09.793420] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:13.971 passed 00:03:13.971 Test: test_reconnect_ctrlr ...[2024-07-15 17:22:09.793575] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.793608] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.793678] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.793724] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 passed 00:03:13.971 Test: test_retry_failover_ctrlr ...[2024-07-15 17:22:09.793765] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.793872] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 passed 00:03:13.971 Test: test_fail_path ...[2024-07-15 17:22:09.793976] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.794019] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:13.971 [2024-07-15 17:22:09.794059] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.794091] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 [2024-07-15 17:22:09.794158] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 passed 00:03:13.971 Test: test_nvme_ns_cmp ...passed 00:03:13.971 Test: test_ana_transition ...passed 00:03:13.971 Test: test_set_preferred_path ...passed 00:03:13.971 Test: test_find_next_io_path ...passed 00:03:13.971 Test: test_find_io_path_min_qd ...passed 00:03:13.971 Test: test_disable_auto_failback ...[2024-07-15 17:22:09.794497] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:13.971 passed 00:03:13.971 Test: test_set_multipath_policy ...passed 00:03:14.231 Test: test_uuid_generation ...passed 00:03:14.231 Test: test_retry_io_to_same_path ...passed 00:03:14.231 Test: test_race_between_reset_and_disconnected ...passed 00:03:14.231 Test: test_ctrlr_op_rpc ...passed 00:03:14.231 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:14.231 Test: test_disable_enable_ctrlr ...passed 00:03:14.231 Test: test_delete_ctrlr_done ...passed 00:03:14.231 Test: test_ns_remove_during_reset ...passed 00:03:14.231 Test: test_io_path_is_current ...passed 00:03:14.231 00:03:14.231 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.231 suites 1 1 n/a 0 0 00:03:14.231 tests 49 49 49 0 0 00:03:14.231 asserts 3577 3577 3577 0 n/a 00:03:14.231 00:03:14.231 Elapsed time = 0.016 seconds 00:03:14.231 [2024-07-15 17:22:09.819010] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:14.231 [2024-07-15 17:22:09.819064] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
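[Editor's note] Each suite in this section is a standalone CUnit binary under the build tree that unittest.sh launches through the run_test wrapper, which adds the START/END banners and timing around the command. Any of them can be re-run by hand when chasing a single failure; a sketch using the bdev_nvme binary path shown above (the run_test name is arbitrary):

    #!/usr/bin/env bash
    # Re-running one CUnit binary from this section by hand (paths from this run).
    repo=/home/vagrant/spdk_repo/spdk
    "$repo/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut"
    # Or, with the log's own bookkeeping (run_test is defined in autotest_common.sh):
    # source "$repo/test/common/autotest_common.sh"
    # run_test my_bdev_nvme_rerun "$repo/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut"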
00:03:14.231 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:14.231 00:03:14.231 00:03:14.231 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.231 http://cunit.sourceforge.net/ 00:03:14.231 00:03:14.231 Test Options 00:03:14.231 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:14.231 00:03:14.231 Suite: raid 00:03:14.231 Test: test_create_raid ...passed 00:03:14.231 Test: test_create_raid_superblock ...passed 00:03:14.231 Test: test_delete_raid ...passed 00:03:14.231 Test: test_create_raid_invalid_args ...[2024-07-15 17:22:09.828652] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:14.231 [2024-07-15 17:22:09.828884] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:14.231 [2024-07-15 17:22:09.829019] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:14.231 [2024-07-15 17:22:09.829080] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:14.231 [2024-07-15 17:22:09.829102] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:14.231 [2024-07-15 17:22:09.829322] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:14.231 [2024-07-15 17:22:09.829343] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:14.231 passed 00:03:14.231 Test: test_delete_raid_invalid_args ...passed 00:03:14.231 Test: test_io_channel ...passed 00:03:14.231 Test: test_reset_io ...passed 00:03:14.231 Test: test_multi_raid ...passed 00:03:14.231 Test: test_io_type_supported ...passed 00:03:14.231 Test: test_raid_json_dump_info ...passed 00:03:14.231 Test: test_context_size ...passed 00:03:14.231 Test: test_raid_level_conversions ...passed 00:03:14.231 Test: test_raid_io_split ...passed 00:03:14.231 Test: test_raid_process ...passed 00:03:14.231 00:03:14.231 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.231 suites 1 1 n/a 0 0 00:03:14.231 tests 14 14 14 0 0 00:03:14.231 asserts 6183 6183 6183 0 n/a 00:03:14.231 00:03:14.231 Elapsed time = 0.008 seconds 00:03:14.232 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:14.232 00:03:14.232 00:03:14.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.232 http://cunit.sourceforge.net/ 00:03:14.232 00:03:14.232 00:03:14.232 Suite: raid_sb 00:03:14.232 Test: test_raid_bdev_write_superblock ...passed 00:03:14.232 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:14.232 Test: test_raid_bdev_parse_superblock ...[2024-07-15 17:22:09.836723] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:14.232 passed 00:03:14.232 Suite: raid_sb_md 00:03:14.232 Test: test_raid_bdev_write_superblock ...passed 00:03:14.232 Test: 
test_raid_bdev_load_base_bdev_superblock ...passed 00:03:14.232 Test: test_raid_bdev_parse_superblock ...passed 00:03:14.232 Suite: raid_sb_md_interleaved 00:03:14.232 Test: test_raid_bdev_write_superblock ...passed 00:03:14.232 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:14.232 Test: test_raid_bdev_parse_superblock ...[2024-07-15 17:22:09.836948] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:14.232 [2024-07-15 17:22:09.837108] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:14.232 passed 00:03:14.232 00:03:14.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.232 suites 3 3 n/a 0 0 00:03:14.232 tests 9 9 9 0 0 00:03:14.232 asserts 139 139 139 0 n/a 00:03:14.232 00:03:14.232 Elapsed time = 0.000 seconds 00:03:14.232 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:14.232 00:03:14.232 00:03:14.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.232 http://cunit.sourceforge.net/ 00:03:14.232 00:03:14.232 00:03:14.232 Suite: concat 00:03:14.232 Test: test_concat_start ...passed 00:03:14.232 Test: test_concat_rw ...passed 00:03:14.232 Test: test_concat_null_payload ...passed 00:03:14.232 00:03:14.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.232 suites 1 1 n/a 0 0 00:03:14.232 tests 3 3 3 0 0 00:03:14.232 asserts 8460 8460 8460 0 n/a 00:03:14.232 00:03:14.232 Elapsed time = 0.000 seconds 00:03:14.232 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:03:14.232 00:03:14.232 00:03:14.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.232 http://cunit.sourceforge.net/ 00:03:14.232 00:03:14.232 00:03:14.232 Suite: raid0 00:03:14.232 Test: test_write_io ...passed 00:03:14.232 Test: test_read_io ...passed 00:03:14.232 Test: test_unmap_io ...passed 00:03:14.232 Test: test_io_failure ...passed 00:03:14.232 Suite: raid0_dif 00:03:14.232 Test: test_write_io ...passed 00:03:14.232 Test: test_read_io ...passed 00:03:14.232 Test: test_unmap_io ...passed 00:03:14.232 Test: test_io_failure ...passed 00:03:14.232 00:03:14.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.232 suites 2 2 n/a 0 0 00:03:14.232 tests 8 8 8 0 0 00:03:14.232 asserts 368291 368291 368291 0 n/a 00:03:14.232 00:03:14.232 Elapsed time = 0.008 seconds 00:03:14.232 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:14.232 00:03:14.232 00:03:14.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.232 http://cunit.sourceforge.net/ 00:03:14.232 00:03:14.232 00:03:14.232 Suite: raid1 00:03:14.232 Test: test_raid1_start ...passed 00:03:14.232 Test: test_raid1_read_balancing ...passed 00:03:14.232 Test: test_raid1_write_error ...passed 00:03:14.232 Test: test_raid1_read_error ...passed 00:03:14.232 00:03:14.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.232 suites 1 1 n/a 0 0 00:03:14.232 tests 4 4 4 0 0 00:03:14.232 asserts 4374 4374 4374 0 n/a 00:03:14.232 00:03:14.232 Elapsed time = 0.000 seconds 00:03:14.232 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:14.232 00:03:14.232 00:03:14.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.232 http://cunit.sourceforge.net/ 00:03:14.232 00:03:14.232 00:03:14.232 Suite: zone 00:03:14.232 Test: test_zone_get_operation ...passed 00:03:14.232 Test: test_bdev_zone_get_info ...passed 00:03:14.232 Test: test_bdev_zone_management ...passed 00:03:14.232 Test: test_bdev_zone_append ...passed 00:03:14.232 Test: test_bdev_zone_append_with_md ...passed 00:03:14.232 Test: test_bdev_zone_appendv ...passed 00:03:14.232 Test: test_bdev_zone_appendv_with_md ...passed 00:03:14.232 Test: test_bdev_io_get_append_location ...passed 00:03:14.232 00:03:14.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.232 suites 1 1 n/a 0 0 00:03:14.232 tests 8 8 8 0 0 00:03:14.232 asserts 94 94 94 0 n/a 00:03:14.232 00:03:14.232 Elapsed time = 0.000 seconds 00:03:14.232 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:14.232 00:03:14.232 00:03:14.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.232 http://cunit.sourceforge.net/ 00:03:14.232 00:03:14.232 00:03:14.232 Suite: gpt_parse 00:03:14.232 Test: test_parse_mbr_and_primary ...[2024-07-15 17:22:09.874208] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:14.232 [2024-07-15 17:22:09.874765] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:14.232 [2024-07-15 17:22:09.874806] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:14.232 [2024-07-15 17:22:09.874819] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:14.232 [2024-07-15 17:22:09.874836] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:14.232 [2024-07-15 17:22:09.874846] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:14.232 passed 00:03:14.232 Test: test_parse_secondary ...[2024-07-15 17:22:09.875373] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:14.232 [2024-07-15 17:22:09.875390] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:14.232 [2024-07-15 17:22:09.875402] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:14.232 [2024-07-15 17:22:09.875445] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:14.232 passed 00:03:14.232 Test: test_check_mbr ...passed 00:03:14.232 Test: test_read_header ...[2024-07-15 17:22:09.876177] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:14.232 [2024-07-15 17:22:09.876209] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:14.232 [2024-07-15 17:22:09.876228] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 
00:03:14.232 [2024-07-15 17:22:09.876240] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:14.232 [2024-07-15 17:22:09.876251] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:14.232 passed 00:03:14.232 Test: test_read_partitions ...[2024-07-15 17:22:09.876262] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:14.232 [2024-07-15 17:22:09.876274] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:14.232 [2024-07-15 17:22:09.876283] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:14.232 [2024-07-15 17:22:09.876299] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:14.232 [2024-07-15 17:22:09.876309] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:14.232 [2024-07-15 17:22:09.876319] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:03:14.232 [2024-07-15 17:22:09.876328] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:03:14.232 [2024-07-15 17:22:09.876945] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:14.232 passed 00:03:14.232 00:03:14.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.232 suites 1 1 n/a 0 0 00:03:14.232 tests 5 5 5 0 0 00:03:14.232 asserts 33 33 33 0 n/a 00:03:14.232 00:03:14.232 Elapsed time = 0.000 seconds 00:03:14.232 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:14.232 00:03:14.232 00:03:14.232 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.232 http://cunit.sourceforge.net/ 00:03:14.232 00:03:14.232 00:03:14.232 Suite: bdev_part 00:03:14.232 Test: part_test ...[2024-07-15 17:22:09.886668] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 935c3562-018a-b155-8857-063637e07548 already exists 00:03:14.232 passed 00:03:14.232 Test: part_free_test ...passed 00:03:14.232 Test: part_get_io_channel_test ...[2024-07-15 17:22:09.886905] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:935c3562-018a-b155-8857-063637e07548 alias for bdev test1 00:03:14.232 passed 00:03:14.232 Test: part_construct_ext ...passed 00:03:14.232 00:03:14.232 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.232 suites 1 1 n/a 0 0 00:03:14.232 tests 4 4 4 0 0 00:03:14.232 asserts 48 48 48 0 n/a 00:03:14.232 00:03:14.232 Elapsed time = 0.000 seconds 00:03:14.233 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:03:14.233 00:03:14.233 00:03:14.233 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.233 http://cunit.sourceforge.net/ 00:03:14.233 00:03:14.233 00:03:14.233 Suite: scsi_nvme_suite 00:03:14.233 Test: scsi_nvme_translate_test ...passed 00:03:14.233 00:03:14.233 Run Summary: Type Total Ran Passed 
Failed Inactive 00:03:14.233 suites 1 1 n/a 0 0 00:03:14.233 tests 1 1 1 0 0 00:03:14.233 asserts 104 104 104 0 n/a 00:03:14.233 00:03:14.233 Elapsed time = 0.000 seconds 00:03:14.233 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:14.233 00:03:14.233 00:03:14.233 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.233 http://cunit.sourceforge.net/ 00:03:14.233 00:03:14.233 00:03:14.233 Suite: lvol 00:03:14.233 Test: ut_lvs_init ...[2024-07-15 17:22:09.902354] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:14.233 passed 00:03:14.233 Test: ut_lvol_init ...passed 00:03:14.233 Test: ut_lvol_snapshot ...passed 00:03:14.233 Test: ut_lvol_clone ...passed 00:03:14.233 Test: ut_lvs_destroy ...passed 00:03:14.233 Test: ut_lvs_unload ...[2024-07-15 17:22:09.902649] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:14.233 passed 00:03:14.233 Test: ut_lvol_resize ...passed 00:03:14.233 Test: ut_lvol_set_read_only ...passed 00:03:14.233 Test: ut_lvol_hotremove ...passed 00:03:14.233 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:14.233 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:14.233 Test: ut_lvol_read_write ...passed 00:03:14.233 Test: ut_vbdev_lvol_submit_request ...passed 00:03:14.233 Test: ut_lvol_examine_config ...[2024-07-15 17:22:09.902846] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:14.233 passed 00:03:14.233 Test: ut_lvol_examine_disk ...passed 00:03:14.233 Test: ut_lvol_rename ...passed 00:03:14.233 Test: ut_bdev_finish ...[2024-07-15 17:22:09.902994] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:03:14.233 [2024-07-15 17:22:09.903082] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:14.233 [2024-07-15 17:22:09.903134] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:14.233 passed 00:03:14.233 Test: ut_lvs_rename ...passed 00:03:14.233 Test: ut_lvol_seek ...passed 00:03:14.233 Test: ut_esnap_dev_create ...[2024-07-15 17:22:09.903200] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:14.233 [2024-07-15 17:22:09.903225] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:03:14.233 [2024-07-15 17:22:09.903252] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:14.233 passed 00:03:14.233 Test: ut_lvol_esnap_clone_bad_args ...[2024-07-15 17:22:09.903328] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:03:14.233 [2024-07-15 17:22:09.903353] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:14.233 passed 00:03:14.233 Test: ut_lvol_shallow_copy ...passed 00:03:14.233 Test: ut_lvol_set_external_parent 
...passed 00:03:14.233 00:03:14.233 [2024-07-15 17:22:09.903417] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:14.233 [2024-07-15 17:22:09.903446] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:03:14.233 [2024-07-15 17:22:09.903492] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:14.233 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.233 suites 1 1 n/a 0 0 00:03:14.233 tests 23 23 23 0 0 00:03:14.233 asserts 770 770 770 0 n/a 00:03:14.233 00:03:14.233 Elapsed time = 0.000 seconds 00:03:14.233 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:14.233 00:03:14.233 00:03:14.233 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.233 http://cunit.sourceforge.net/ 00:03:14.233 00:03:14.233 00:03:14.233 Suite: zone_block 00:03:14.233 Test: test_zone_block_create ...passed 00:03:14.233 Test: test_zone_block_create_invalid ...[2024-07-15 17:22:09.913570] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:03:14.233 [2024-07-15 17:22:09.913761] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 17:22:09.913789] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:14.233 [2024-07-15 17:22:09.913803] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-15 17:22:09.913822] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:14.233 [2024-07-15 17:22:09.913836] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-15 17:22:09.913848] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:14.233 [2024-07-15 17:22:09.913863] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argumentpassed 00:03:14.233 Test: test_get_zone_info ...[2024-07-15 17:22:09.913945] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.913984] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.913997] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:14.233 passed 00:03:14.233 Test: test_supported_io_types ...passed 00:03:14.233 Test: test_reset_zone ...[2024-07-15 17:22:09.914067] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.914096] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 passed 00:03:14.233 Test: test_open_zone ...[2024-07-15 17:22:09.914145] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.914395] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.914425] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 passed 00:03:14.233 Test: test_zone_write ...[2024-07-15 17:22:09.914465] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:14.233 [2024-07-15 17:22:09.914474] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.914486] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:14.233 [2024-07-15 17:22:09.914494] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.915093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:14.233 [2024-07-15 17:22:09.915113] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.915124] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:14.233 [2024-07-15 17:22:09.915132] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 passed 00:03:14.233 Test: test_zone_read ...[2024-07-15 17:22:09.915804] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:14.233 [2024-07-15 17:22:09.915828] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.915864] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:14.233 [2024-07-15 17:22:09.915874] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:14.233 [2024-07-15 17:22:09.915886] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:14.233 [2024-07-15 17:22:09.915894] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 passed 00:03:14.233 Test: test_close_zone ...passed 00:03:14.233 Test: test_finish_zone ...[2024-07-15 17:22:09.915945] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:14.233 [2024-07-15 17:22:09.915998] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.916030] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.916045] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.916087] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.916109] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.916178] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.916220] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 passed 00:03:14.233 Test: test_append_zone ...[2024-07-15 17:22:09.916258] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:14.233 [2024-07-15 17:22:09.916269] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.916287] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:14.233 [2024-07-15 17:22:09.916300] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:14.233 [2024-07-15 17:22:09.917484] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:14.233 [2024-07-15 17:22:09.917507] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:14.233 passed 00:03:14.233 00:03:14.233 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.233 suites 1 1 n/a 0 0 00:03:14.233 tests 11 11 11 0 0 00:03:14.233 asserts 3437 3437 3437 0 n/a 00:03:14.233 00:03:14.233 Elapsed time = 0.008 seconds 00:03:14.233 17:22:09 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:14.233 00:03:14.233 00:03:14.233 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.233 http://cunit.sourceforge.net/ 00:03:14.233 00:03:14.233 00:03:14.233 Suite: bdev 00:03:14.234 Test: basic ...[2024-07-15 17:22:09.928791] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b269): Operation not permitted (rc=-1) 00:03:14.234 [2024-07-15 17:22:09.929063] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0xf892046a480 (0x24b260): Operation not permitted (rc=-1) 00:03:14.234 [2024-07-15 17:22:09.929088] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b269): Operation not permitted (rc=-1) 00:03:14.234 passed 00:03:14.234 Test: unregister_and_close ...passed 00:03:14.234 Test: unregister_and_close_different_threads ...passed 00:03:14.234 Test: basic_qos ...passed 00:03:14.234 Test: put_channel_during_reset ...passed 00:03:14.234 Test: aborted_reset ...passed 00:03:14.234 Test: aborted_reset_no_outstanding_io ...passed 00:03:14.234 Test: io_during_reset ...passed 00:03:14.234 Test: reset_completions ...passed 00:03:14.234 Test: io_during_qos_queue ...passed 00:03:14.234 Test: io_during_qos_reset ...passed 00:03:14.234 Test: enomem ...passed 00:03:14.234 Test: enomem_multi_bdev ...passed 00:03:14.234 Test: enomem_multi_bdev_unregister ...passed 00:03:14.234 Test: enomem_multi_io_target ...passed 00:03:14.234 Test: qos_dynamic_enable ...passed 00:03:14.234 Test: bdev_histograms_mt ...passed 00:03:14.234 Test: bdev_set_io_timeout_mt ...[2024-07-15 17:22:09.964365] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0xf892046a600 not unregistered 00:03:14.234 passed 00:03:14.234 Test: lock_lba_range_then_submit_io ...[2024-07-15 17:22:09.965426] thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x24b248 already registered (old:0xf892046a600 new:0xf892046a780) 00:03:14.234 passed 00:03:14.234 Test: unregister_during_reset ...passed 00:03:14.234 Test: event_notify_and_close ...passed 00:03:14.234 Test: unregister_and_qos_poller ...passed 00:03:14.234 Suite: bdev_wrong_thread 00:03:14.234 Test: spdk_bdev_register_wt ...[2024-07-15 17:22:09.971817] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8503:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0xf8920433380 (0xf8920433380) 00:03:14.234 passed 00:03:14.234 Test: spdk_bdev_examine_wt ...passed[2024-07-15 17:22:09.971915] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0xf8920433380 (0xf8920433380) 00:03:14.234 00:03:14.234 00:03:14.234 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.234 suites 2 2 n/a 0 0 00:03:14.234 tests 24 24 24 0 0 00:03:14.234 asserts 621 621 621 0 n/a 00:03:14.234 00:03:14.234 Elapsed time = 0.047 seconds 00:03:14.234 00:03:14.234 real 0m0.259s 00:03:14.234 user 0m0.149s 00:03:14.234 sys 0m0.090s 00:03:14.234 17:22:09 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.234 17:22:09 unittest.unittest_bdev -- 
common/autotest_common.sh@10 -- # set +x 00:03:14.234 ************************************ 00:03:14.234 END TEST unittest_bdev 00:03:14.234 ************************************ 00:03:14.234 17:22:10 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:14.234 17:22:10 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:14.234 17:22:10 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:14.234 17:22:10 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:14.234 17:22:10 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:14.234 17:22:10 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:03:14.234 17:22:10 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.234 17:22:10 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.234 17:22:10 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:14.234 ************************************ 00:03:14.234 START TEST unittest_blob_blobfs 00:03:14.234 ************************************ 00:03:14.234 17:22:10 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:03:14.234 17:22:10 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:14.234 17:22:10 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:14.234 00:03:14.234 00:03:14.234 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.234 http://cunit.sourceforge.net/ 00:03:14.234 00:03:14.234 00:03:14.234 Suite: blob_nocopy_noextent 00:03:14.234 Test: blob_init ...[2024-07-15 17:22:10.040124] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:14.492 passed 00:03:14.493 Test: blob_thin_provision ...passed 00:03:14.493 Test: blob_read_only ...passed 00:03:14.493 Test: bs_load ...[2024-07-15 17:22:10.121564] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:14.493 passed 00:03:14.493 Test: bs_load_custom_cluster_size ...passed 00:03:14.493 Test: bs_load_after_failed_grow ...passed 00:03:14.493 Test: bs_cluster_sz ...[2024-07-15 17:22:10.146098] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:14.493 [2024-07-15 17:22:10.146183] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:14.493 [2024-07-15 17:22:10.146203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:14.493 passed 00:03:14.493 Test: bs_resize_md ...passed 00:03:14.493 Test: bs_destroy ...passed 00:03:14.493 Test: bs_type ...passed 00:03:14.493 Test: bs_super_block ...passed 00:03:14.493 Test: bs_test_recover_cluster_count ...passed 00:03:14.493 Test: bs_grow_live ...passed 00:03:14.493 Test: bs_grow_live_no_space ...passed 00:03:14.493 Test: bs_test_grow ...passed 00:03:14.493 Test: blob_serialize_test ...passed 00:03:14.493 Test: super_block_crc ...passed 00:03:14.493 Test: blob_thin_prov_write_count_io ...passed 00:03:14.493 Test: blob_thin_prov_unmap_cluster ...passed 00:03:14.493 Test: bs_load_iter_test ...passed 00:03:14.751 Test: blob_relations ...[2024-07-15 17:22:10.324727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.751 [2024-07-15 17:22:10.324806] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 [2024-07-15 17:22:10.324940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.751 [2024-07-15 17:22:10.324953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 passed 00:03:14.751 Test: blob_relations2 ...[2024-07-15 17:22:10.337692] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.751 [2024-07-15 17:22:10.337731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 [2024-07-15 17:22:10.337741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.751 [2024-07-15 17:22:10.337748] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 [2024-07-15 17:22:10.337884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.751 [2024-07-15 17:22:10.337895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 [2024-07-15 17:22:10.337930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:14.751 [2024-07-15 17:22:10.337939] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 passed 00:03:14.751 Test: blob_relations3 ...passed 00:03:14.751 Test: blobstore_clean_power_failure ...passed 00:03:14.751 Test: blob_delete_snapshot_power_failure ...[2024-07-15 17:22:10.504986] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:14.751 [2024-07-15 17:22:10.516391] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:14.751 [2024-07-15 17:22:10.516453] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:14.751 [2024-07-15 17:22:10.516463] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 [2024-07-15 17:22:10.527803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:14.751 [2024-07-15 17:22:10.527842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:14.751 [2024-07-15 17:22:10.527851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:14.751 [2024-07-15 17:22:10.527859] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 [2024-07-15 17:22:10.539631] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:14.751 [2024-07-15 17:22:10.539674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 [2024-07-15 17:22:10.551587] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:14.751 [2024-07-15 17:22:10.551643] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:14.751 [2024-07-15 17:22:10.563563] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:14.751 [2024-07-15 17:22:10.563615] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:15.009 passed 00:03:15.010 Test: blob_create_snapshot_power_failure ...[2024-07-15 17:22:10.598465] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:15.010 [2024-07-15 17:22:10.621406] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:15.010 [2024-07-15 17:22:10.632715] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:15.010 passed 00:03:15.010 Test: blob_io_unit ...passed 00:03:15.010 Test: blob_io_unit_compatibility ...passed 00:03:15.010 Test: blob_ext_md_pages ...passed 00:03:15.010 Test: blob_esnap_io_4096_4096 ...passed 00:03:15.010 Test: blob_esnap_io_512_512 ...passed 00:03:15.010 Test: blob_esnap_io_4096_512 ...passed 00:03:15.010 Test: blob_esnap_io_512_4096 ...passed 00:03:15.010 Test: blob_esnap_clone_resize ...passed 00:03:15.010 Suite: blob_bs_nocopy_noextent 00:03:15.268 Test: blob_open ...passed 00:03:15.268 Test: blob_create ...[2024-07-15 17:22:10.894665] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:15.268 passed 00:03:15.268 Test: blob_create_loop ...passed 00:03:15.268 Test: blob_create_fail ...[2024-07-15 17:22:10.982095] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:15.268 passed 00:03:15.268 Test: blob_create_internal ...passed 00:03:15.268 Test: blob_create_zero_extent ...passed 00:03:15.526 Test: blob_snapshot ...passed 00:03:15.526 Test: blob_clone ...passed 00:03:15.526 Test: blob_inflate 
...[2024-07-15 17:22:11.168492] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:15.526 passed 00:03:15.526 Test: blob_delete ...passed 00:03:15.526 Test: blob_resize_test ...[2024-07-15 17:22:11.234341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:15.526 passed 00:03:15.526 Test: blob_resize_thin_test ...passed 00:03:15.526 Test: channel_ops ...passed 00:03:15.526 Test: blob_super ...passed 00:03:15.783 Test: blob_rw_verify_iov ...passed 00:03:15.783 Test: blob_unmap ...passed 00:03:15.783 Test: blob_iter ...passed 00:03:15.783 Test: blob_parse_md ...passed 00:03:15.783 Test: bs_load_pending_removal ...passed 00:03:15.783 Test: bs_unload ...[2024-07-15 17:22:11.551551] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:15.783 passed 00:03:15.783 Test: bs_usable_clusters ...passed 00:03:16.096 Test: blob_crc ...[2024-07-15 17:22:11.620740] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:16.096 [2024-07-15 17:22:11.620803] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:16.096 passed 00:03:16.096 Test: blob_flags ...passed 00:03:16.096 Test: bs_version ...passed 00:03:16.096 Test: blob_set_xattrs_test ...[2024-07-15 17:22:11.728647] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:16.096 [2024-07-15 17:22:11.728735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:16.096 passed 00:03:16.096 Test: blob_thin_prov_alloc ...passed 00:03:16.096 Test: blob_insert_cluster_msg_test ...passed 00:03:16.096 Test: blob_thin_prov_rw ...passed 00:03:16.096 Test: blob_thin_prov_rle ...passed 00:03:16.355 Test: blob_thin_prov_rw_iov ...passed 00:03:16.355 Test: blob_snapshot_rw ...passed 00:03:16.355 Test: blob_snapshot_rw_iov ...passed 00:03:16.355 Test: blob_inflate_rw ...passed 00:03:16.355 Test: blob_snapshot_freeze_io ...passed 00:03:16.614 Test: blob_operation_split_rw ...passed 00:03:16.614 Test: blob_operation_split_rw_iov ...passed 00:03:16.614 Test: blob_simultaneous_operations ...[2024-07-15 17:22:12.276857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:16.614 [2024-07-15 17:22:12.276942] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:16.614 [2024-07-15 17:22:12.277230] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:16.614 [2024-07-15 17:22:12.277241] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:16.614 [2024-07-15 17:22:12.280749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:16.614 [2024-07-15 17:22:12.280774] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:16.614 [2024-07-15 17:22:12.280792] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:16.614 [2024-07-15 17:22:12.280800] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:16.614 passed 00:03:16.614 Test: blob_persist_test ...passed 00:03:16.614 Test: blob_decouple_snapshot ...passed 00:03:16.614 Test: blob_seek_io_unit ...passed 00:03:16.873 Test: blob_nested_freezes ...passed 00:03:16.873 Test: blob_clone_resize ...passed 00:03:16.873 Test: blob_shallow_copy ...[2024-07-15 17:22:12.512603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:16.873 [2024-07-15 17:22:12.512666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:16.873 [2024-07-15 17:22:12.512678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:16.873 passed 00:03:16.873 Suite: blob_blob_nocopy_noextent 00:03:16.873 Test: blob_write ...passed 00:03:16.873 Test: blob_read ...passed 00:03:16.873 Test: blob_rw_verify ...passed 00:03:16.873 Test: blob_rw_verify_iov_nomem ...passed 00:03:17.132 Test: blob_rw_iov_read_only ...passed 00:03:17.132 Test: blob_xattr ...passed 00:03:17.132 Test: blob_dirty_shutdown ...passed 00:03:17.132 Test: blob_is_degraded ...passed 00:03:17.132 Suite: blob_esnap_bs_nocopy_noextent 00:03:17.132 Test: blob_esnap_create ...passed 00:03:17.132 Test: blob_esnap_thread_add_remove ...passed 00:03:17.132 Test: blob_esnap_clone_snapshot ...passed 00:03:17.132 Test: blob_esnap_clone_inflate ...passed 00:03:17.392 Test: blob_esnap_clone_decouple ...passed 00:03:17.392 Test: blob_esnap_clone_reload ...passed 00:03:17.392 Test: blob_esnap_hotplug ...passed 00:03:17.392 Test: blob_set_parent ...[2024-07-15 17:22:13.092978] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:17.392 [2024-07-15 17:22:13.093073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:17.392 [2024-07-15 17:22:13.093112] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:17.392 [2024-07-15 17:22:13.093122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:17.392 [2024-07-15 17:22:13.093173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:17.392 passed 00:03:17.392 Test: blob_set_external_parent ...[2024-07-15 17:22:13.127723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:17.392 [2024-07-15 17:22:13.127783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:17.392 [2024-07-15 17:22:13.127793] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:03:17.392 [2024-07-15 17:22:13.127841] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:17.392 passed 00:03:17.392 Suite: blob_nocopy_extent 00:03:17.392 Test: blob_init ...[2024-07-15 17:22:13.138796] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:17.392 passed 00:03:17.392 Test: blob_thin_provision ...passed 00:03:17.392 Test: blob_read_only ...passed 00:03:17.392 Test: bs_load ...[2024-07-15 17:22:13.185090] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:17.392 passed 00:03:17.392 Test: bs_load_custom_cluster_size ...passed 00:03:17.392 Test: bs_load_after_failed_grow ...passed 00:03:17.392 Test: bs_cluster_sz ...[2024-07-15 17:22:13.208704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:17.392 [2024-07-15 17:22:13.208782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:17.392 [2024-07-15 17:22:13.208797] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:17.392 passed 00:03:17.651 Test: bs_resize_md ...passed 00:03:17.651 Test: bs_destroy ...passed 00:03:17.651 Test: bs_type ...passed 00:03:17.651 Test: bs_super_block ...passed 00:03:17.651 Test: bs_test_recover_cluster_count ...passed 00:03:17.651 Test: bs_grow_live ...passed 00:03:17.651 Test: bs_grow_live_no_space ...passed 00:03:17.651 Test: bs_test_grow ...passed 00:03:17.651 Test: blob_serialize_test ...passed 00:03:17.651 Test: super_block_crc ...passed 00:03:17.651 Test: blob_thin_prov_write_count_io ...passed 00:03:17.651 Test: blob_thin_prov_unmap_cluster ...passed 00:03:17.651 Test: bs_load_iter_test ...passed 00:03:17.651 Test: blob_relations ...[2024-07-15 17:22:13.374872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.651 [2024-07-15 17:22:13.374930] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.651 [2024-07-15 17:22:13.375050] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.651 [2024-07-15 17:22:13.375061] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.651 passed 00:03:17.651 Test: blob_relations2 ...[2024-07-15 17:22:13.386477] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.651 [2024-07-15 17:22:13.386508] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.651 [2024-07-15 17:22:13.386518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.651 [2024-07-15 17:22:13.386525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.651 [2024-07-15 
17:22:13.386656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.651 [2024-07-15 17:22:13.386667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.651 [2024-07-15 17:22:13.386705] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:17.651 [2024-07-15 17:22:13.386714] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.651 passed 00:03:17.651 Test: blob_relations3 ...passed 00:03:17.911 Test: blobstore_clean_power_failure ...passed 00:03:17.911 Test: blob_delete_snapshot_power_failure ...[2024-07-15 17:22:13.551207] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:17.911 [2024-07-15 17:22:13.562668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:17.911 [2024-07-15 17:22:13.574215] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:17.911 [2024-07-15 17:22:13.574266] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:17.911 [2024-07-15 17:22:13.574275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.911 [2024-07-15 17:22:13.585630] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:17.911 [2024-07-15 17:22:13.585676] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:17.911 [2024-07-15 17:22:13.585685] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:17.911 [2024-07-15 17:22:13.585693] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.911 [2024-07-15 17:22:13.596972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:17.911 [2024-07-15 17:22:13.597008] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:17.911 [2024-07-15 17:22:13.597016] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:17.911 [2024-07-15 17:22:13.597025] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.911 [2024-07-15 17:22:13.608474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:17.911 [2024-07-15 17:22:13.608515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.911 [2024-07-15 17:22:13.619899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:17.911 [2024-07-15 17:22:13.619943] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.911 [2024-07-15 17:22:13.631488] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:17.911 [2024-07-15 17:22:13.631529] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:17.911 passed 00:03:17.911 Test: blob_create_snapshot_power_failure ...[2024-07-15 17:22:13.666718] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:17.911 [2024-07-15 17:22:13.678091] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:17.911 [2024-07-15 17:22:13.701377] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:17.911 [2024-07-15 17:22:13.712547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:18.171 passed 00:03:18.171 Test: blob_io_unit ...passed 00:03:18.171 Test: blob_io_unit_compatibility ...passed 00:03:18.171 Test: blob_ext_md_pages ...passed 00:03:18.171 Test: blob_esnap_io_4096_4096 ...passed 00:03:18.171 Test: blob_esnap_io_512_512 ...passed 00:03:18.171 Test: blob_esnap_io_4096_512 ...passed 00:03:18.171 Test: blob_esnap_io_512_4096 ...passed 00:03:18.171 Test: blob_esnap_clone_resize ...passed 00:03:18.171 Suite: blob_bs_nocopy_extent 00:03:18.171 Test: blob_open ...passed 00:03:18.171 Test: blob_create ...[2024-07-15 17:22:13.971965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:18.171 passed 00:03:18.439 Test: blob_create_loop ...passed 00:03:18.440 Test: blob_create_fail ...[2024-07-15 17:22:14.061498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:18.440 passed 00:03:18.440 Test: blob_create_internal ...passed 00:03:18.440 Test: blob_create_zero_extent ...passed 00:03:18.440 Test: blob_snapshot ...passed 00:03:18.440 Test: blob_clone ...passed 00:03:18.440 Test: blob_inflate ...[2024-07-15 17:22:14.247919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
00:03:18.440 passed 00:03:18.701 Test: blob_delete ...passed 00:03:18.701 Test: blob_resize_test ...[2024-07-15 17:22:14.317494] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:18.701 passed 00:03:18.701 Test: blob_resize_thin_test ...passed 00:03:18.701 Test: channel_ops ...passed 00:03:18.701 Test: blob_super ...passed 00:03:18.701 Test: blob_rw_verify_iov ...passed 00:03:18.701 Test: blob_unmap ...passed 00:03:18.959 Test: blob_iter ...passed 00:03:18.959 Test: blob_parse_md ...passed 00:03:18.959 Test: bs_load_pending_removal ...passed 00:03:18.959 Test: bs_unload ...[2024-07-15 17:22:14.631723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:18.959 passed 00:03:18.959 Test: bs_usable_clusters ...passed 00:03:18.959 Test: blob_crc ...[2024-07-15 17:22:14.701521] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:18.959 [2024-07-15 17:22:14.701595] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:18.959 passed 00:03:18.959 Test: blob_flags ...passed 00:03:18.959 Test: bs_version ...passed 00:03:19.218 Test: blob_set_xattrs_test ...[2024-07-15 17:22:14.806443] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:19.218 [2024-07-15 17:22:14.806498] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:19.218 passed 00:03:19.218 Test: blob_thin_prov_alloc ...passed 00:03:19.218 Test: blob_insert_cluster_msg_test ...passed 00:03:19.218 Test: blob_thin_prov_rw ...passed 00:03:19.218 Test: blob_thin_prov_rle ...passed 00:03:19.218 Test: blob_thin_prov_rw_iov ...passed 00:03:19.218 Test: blob_snapshot_rw ...passed 00:03:19.476 Test: blob_snapshot_rw_iov ...passed 00:03:19.476 Test: blob_inflate_rw ...passed 00:03:19.476 Test: blob_snapshot_freeze_io ...passed 00:03:19.476 Test: blob_operation_split_rw ...passed 00:03:19.476 Test: blob_operation_split_rw_iov ...passed 00:03:19.738 Test: blob_simultaneous_operations ...[2024-07-15 17:22:15.331175] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:19.738 [2024-07-15 17:22:15.331245] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.738 [2024-07-15 17:22:15.331537] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:19.738 [2024-07-15 17:22:15.331548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.738 [2024-07-15 17:22:15.335181] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:19.738 [2024-07-15 17:22:15.335204] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.738 [2024-07-15 17:22:15.335223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:19.738 [2024-07-15 17:22:15.335231] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:19.738 passed 00:03:19.738 Test: blob_persist_test ...passed 00:03:19.738 Test: blob_decouple_snapshot ...passed 00:03:19.738 Test: blob_seek_io_unit ...passed 00:03:19.738 Test: blob_nested_freezes ...passed 00:03:19.738 Test: blob_clone_resize ...passed 00:03:19.738 Test: blob_shallow_copy ...[2024-07-15 17:22:15.561473] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:19.738 [2024-07-15 17:22:15.561547] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:19.738 [2024-07-15 17:22:15.561559] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:20.018 passed 00:03:20.018 Suite: blob_blob_nocopy_extent 00:03:20.018 Test: blob_write ...passed 00:03:20.018 Test: blob_read ...passed 00:03:20.018 Test: blob_rw_verify ...passed 00:03:20.018 Test: blob_rw_verify_iov_nomem ...passed 00:03:20.018 Test: blob_rw_iov_read_only ...passed 00:03:20.019 Test: blob_xattr ...passed 00:03:20.019 Test: blob_dirty_shutdown ...passed 00:03:20.287 Test: blob_is_degraded ...passed 00:03:20.287 Suite: blob_esnap_bs_nocopy_extent 00:03:20.287 Test: blob_esnap_create ...passed 00:03:20.287 Test: blob_esnap_thread_add_remove ...passed 00:03:20.287 Test: blob_esnap_clone_snapshot ...passed 00:03:20.287 Test: blob_esnap_clone_inflate ...passed 00:03:20.287 Test: blob_esnap_clone_decouple ...passed 00:03:20.287 Test: blob_esnap_clone_reload ...passed 00:03:20.287 Test: blob_esnap_hotplug ...passed 00:03:20.546 Test: blob_set_parent ...[2024-07-15 17:22:16.131149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:20.546 [2024-07-15 17:22:16.131223] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:20.546 [2024-07-15 17:22:16.131248] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:20.546 [2024-07-15 17:22:16.131258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:20.546 [2024-07-15 17:22:16.131320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:20.546 passed 00:03:20.546 Test: blob_set_external_parent ...[2024-07-15 17:22:16.166249] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:20.546 [2024-07-15 17:22:16.166307] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:20.546 [2024-07-15 17:22:16.166317] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:20.546 [2024-07-15 17:22:16.166367] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:20.546 passed 00:03:20.546 Suite: blob_copy_noextent 00:03:20.546 Test: blob_init ...[2024-07-15 17:22:16.178099] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:20.546 passed 00:03:20.546 Test: blob_thin_provision ...passed 00:03:20.546 Test: blob_read_only ...passed 00:03:20.546 Test: bs_load ...[2024-07-15 17:22:16.225170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:20.546 passed 00:03:20.546 Test: bs_load_custom_cluster_size ...passed 00:03:20.546 Test: bs_load_after_failed_grow ...passed 00:03:20.546 Test: bs_cluster_sz ...[2024-07-15 17:22:16.248988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:20.546 [2024-07-15 17:22:16.249065] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:20.546 [2024-07-15 17:22:16.249083] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:20.546 passed 00:03:20.546 Test: bs_resize_md ...passed 00:03:20.546 Test: bs_destroy ...passed 00:03:20.546 Test: bs_type ...passed 00:03:20.546 Test: bs_super_block ...passed 00:03:20.546 Test: bs_test_recover_cluster_count ...passed 00:03:20.546 Test: bs_grow_live ...passed 00:03:20.546 Test: bs_grow_live_no_space ...passed 00:03:20.546 Test: bs_test_grow ...passed 00:03:20.546 Test: blob_serialize_test ...passed 00:03:20.546 Test: super_block_crc ...passed 00:03:20.546 Test: blob_thin_prov_write_count_io ...passed 00:03:20.804 Test: blob_thin_prov_unmap_cluster ...passed 00:03:20.804 Test: bs_load_iter_test ...passed 00:03:20.804 Test: blob_relations ...[2024-07-15 17:22:16.416491] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.804 [2024-07-15 17:22:16.416555] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.804 [2024-07-15 17:22:16.416663] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.804 [2024-07-15 17:22:16.416675] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.804 passed 00:03:20.804 Test: blob_relations2 ...[2024-07-15 17:22:16.428731] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.804 [2024-07-15 17:22:16.428772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.804 [2024-07-15 17:22:16.428783] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.804 [2024-07-15 17:22:16.428790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.804 [2024-07-15 17:22:16.428918] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:03:20.804 [2024-07-15 17:22:16.428929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.804 [2024-07-15 17:22:16.428963] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:20.804 [2024-07-15 17:22:16.428971] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.804 passed 00:03:20.804 Test: blob_relations3 ...passed 00:03:20.804 Test: blobstore_clean_power_failure ...passed 00:03:20.804 Test: blob_delete_snapshot_power_failure ...[2024-07-15 17:22:16.590970] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:20.804 [2024-07-15 17:22:16.602603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:20.804 [2024-07-15 17:22:16.602657] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:20.804 [2024-07-15 17:22:16.602667] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.804 [2024-07-15 17:22:16.614418] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:20.804 [2024-07-15 17:22:16.614456] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:20.804 [2024-07-15 17:22:16.614464] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:20.804 [2024-07-15 17:22:16.614472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:20.804 [2024-07-15 17:22:16.626045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:20.804 [2024-07-15 17:22:16.626085] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:21.063 [2024-07-15 17:22:16.637671] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:21.063 [2024-07-15 17:22:16.637723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:21.063 [2024-07-15 17:22:16.649351] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:21.063 [2024-07-15 17:22:16.649397] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:21.063 passed 00:03:21.063 Test: blob_create_snapshot_power_failure ...[2024-07-15 17:22:16.684409] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:21.063 [2024-07-15 17:22:16.707534] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:21.063 [2024-07-15 17:22:16.719250] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:21.063 passed 
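
The bs_cluster_sz and power-failure *ERROR* records above are expected output from negative tests: the suite deliberately hands the blobstore a zeroed option, a cluster size (4095) smaller than the 4096-byte page, and a metadata reservation larger than the available clusters, then checks that initialization is refused. As a minimal sketch only (the helper name and constants are illustrative, not SPDK's actual bs_opts_verify/bs_alloc code), the rule being exercised looks like this:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SZ 4096u   /* assumed blobstore page size, matching the 4096 in the log */

    /* Hypothetical stand-in for the option check the unit test provokes;
     * it only restates the constraint reported in the records above. */
    static bool
    check_cluster_sz(uint32_t cluster_sz)
    {
            if (cluster_sz == 0) {
                    fprintf(stderr, "Blobstore options cannot be set to 0\n");
                    return false;
            }
            if (cluster_sz < PAGE_SZ) {
                    fprintf(stderr, "Cluster size %u is smaller than page size %u\n",
                            cluster_sz, PAGE_SZ);
                    return false;
            }
            return true;
    }

    int
    main(void)
    {
            /* 4095 reproduces the rejected value seen in the bs_cluster_sz test. */
            return check_cluster_sz(4095) ? 0 : 1;
    }
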
00:03:21.063 Test: blob_io_unit ...passed 00:03:21.063 Test: blob_io_unit_compatibility ...passed 00:03:21.063 Test: blob_ext_md_pages ...passed 00:03:21.063 Test: blob_esnap_io_4096_4096 ...passed 00:03:21.063 Test: blob_esnap_io_512_512 ...passed 00:03:21.063 Test: blob_esnap_io_4096_512 ...passed 00:03:21.063 Test: blob_esnap_io_512_4096 ...passed 00:03:21.321 Test: blob_esnap_clone_resize ...passed 00:03:21.321 Suite: blob_bs_copy_noextent 00:03:21.321 Test: blob_open ...passed 00:03:21.321 Test: blob_create ...[2024-07-15 17:22:16.965066] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:21.321 passed 00:03:21.321 Test: blob_create_loop ...passed 00:03:21.321 Test: blob_create_fail ...[2024-07-15 17:22:17.049530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:21.321 passed 00:03:21.321 Test: blob_create_internal ...passed 00:03:21.321 Test: blob_create_zero_extent ...passed 00:03:21.579 Test: blob_snapshot ...passed 00:03:21.579 Test: blob_clone ...passed 00:03:21.579 Test: blob_inflate ...[2024-07-15 17:22:17.226721] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:21.579 passed 00:03:21.579 Test: blob_delete ...passed 00:03:21.579 Test: blob_resize_test ...[2024-07-15 17:22:17.293451] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:21.579 passed 00:03:21.579 Test: blob_resize_thin_test ...passed 00:03:21.579 Test: channel_ops ...passed 00:03:21.837 Test: blob_super ...passed 00:03:21.837 Test: blob_rw_verify_iov ...passed 00:03:21.837 Test: blob_unmap ...passed 00:03:21.837 Test: blob_iter ...passed 00:03:21.837 Test: blob_parse_md ...passed 00:03:21.838 Test: bs_load_pending_removal ...passed 00:03:21.838 Test: bs_unload ...[2024-07-15 17:22:17.615331] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:21.838 passed 00:03:21.838 Test: bs_usable_clusters ...passed 00:03:22.095 Test: blob_crc ...[2024-07-15 17:22:17.685501] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:22.095 [2024-07-15 17:22:17.685562] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:22.095 passed 00:03:22.095 Test: blob_flags ...passed 00:03:22.095 Test: bs_version ...passed 00:03:22.095 Test: blob_set_xattrs_test ...[2024-07-15 17:22:17.791596] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:22.095 [2024-07-15 17:22:17.791688] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:22.095 passed 00:03:22.095 Test: blob_thin_prov_alloc ...passed 00:03:22.095 Test: blob_insert_cluster_msg_test ...passed 00:03:22.095 Test: blob_thin_prov_rw ...passed 00:03:22.353 Test: blob_thin_prov_rle ...passed 00:03:22.353 Test: blob_thin_prov_rw_iov ...passed 00:03:22.353 Test: blob_snapshot_rw ...passed 00:03:22.353 Test: blob_snapshot_rw_iov ...passed 00:03:22.353 Test: 
blob_inflate_rw ...passed 00:03:22.353 Test: blob_snapshot_freeze_io ...passed 00:03:22.611 Test: blob_operation_split_rw ...passed 00:03:22.611 Test: blob_operation_split_rw_iov ...passed 00:03:22.611 Test: blob_simultaneous_operations ...[2024-07-15 17:22:18.323274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.611 [2024-07-15 17:22:18.323335] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.611 [2024-07-15 17:22:18.323623] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.611 [2024-07-15 17:22:18.323634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.611 [2024-07-15 17:22:18.325965] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.611 [2024-07-15 17:22:18.325985] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.611 [2024-07-15 17:22:18.326002] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:22.611 [2024-07-15 17:22:18.326009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:22.611 passed 00:03:22.611 Test: blob_persist_test ...passed 00:03:22.611 Test: blob_decouple_snapshot ...passed 00:03:22.869 Test: blob_seek_io_unit ...passed 00:03:22.869 Test: blob_nested_freezes ...passed 00:03:22.869 Test: blob_clone_resize ...passed 00:03:22.869 Test: blob_shallow_copy ...[2024-07-15 17:22:18.553836] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:22.869 [2024-07-15 17:22:18.553910] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:22.869 [2024-07-15 17:22:18.553922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:22.869 passed 00:03:22.869 Suite: blob_blob_copy_noextent 00:03:22.869 Test: blob_write ...passed 00:03:22.869 Test: blob_read ...passed 00:03:22.869 Test: blob_rw_verify ...passed 00:03:23.128 Test: blob_rw_verify_iov_nomem ...passed 00:03:23.128 Test: blob_rw_iov_read_only ...passed 00:03:23.128 Test: blob_xattr ...passed 00:03:23.128 Test: blob_dirty_shutdown ...passed 00:03:23.128 Test: blob_is_degraded ...passed 00:03:23.128 Suite: blob_esnap_bs_copy_noextent 00:03:23.128 Test: blob_esnap_create ...passed 00:03:23.128 Test: blob_esnap_thread_add_remove ...passed 00:03:23.128 Test: blob_esnap_clone_snapshot ...passed 00:03:23.386 Test: blob_esnap_clone_inflate ...passed 00:03:23.386 Test: blob_esnap_clone_decouple ...passed 00:03:23.386 Test: blob_esnap_clone_reload ...passed 00:03:23.386 Test: blob_esnap_hotplug ...passed 00:03:23.386 Test: blob_set_parent ...[2024-07-15 17:22:19.114139] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:23.386 [2024-07-15 17:22:19.114204] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:23.386 [2024-07-15 17:22:19.114227] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:23.386 [2024-07-15 17:22:19.114238] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:23.386 [2024-07-15 17:22:19.114290] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:23.386 passed 00:03:23.386 Test: blob_set_external_parent ...[2024-07-15 17:22:19.148847] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:23.386 [2024-07-15 17:22:19.148901] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:23.386 [2024-07-15 17:22:19.148911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:23.386 [2024-07-15 17:22:19.148962] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:23.386 passed 00:03:23.386 Suite: blob_copy_extent 00:03:23.386 Test: blob_init ...[2024-07-15 17:22:19.160412] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:23.386 passed 00:03:23.386 Test: blob_thin_provision ...passed 00:03:23.386 Test: blob_read_only ...passed 00:03:23.386 Test: bs_load ...[2024-07-15 17:22:19.207293] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:23.386 passed 00:03:23.658 Test: bs_load_custom_cluster_size ...passed 00:03:23.658 Test: bs_load_after_failed_grow ...passed 00:03:23.658 Test: bs_cluster_sz ...[2024-07-15 17:22:19.230701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:23.658 [2024-07-15 17:22:19.230778] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
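
The blob_set_external_parent failures a few records up are likewise intentional: the test offers an external snapshot device whose size (61440 bytes) is not a whole number of 16384-byte clusters (61440 / 16384 = 3.75), so the call is rejected before any parent is set. A minimal sketch of that divisibility rule, with illustrative names rather than SPDK's real ones:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper restating the check behind
     * "Esnap device size 61440 is not an integer multiple of cluster size 16384". */
    static bool
    esnap_size_is_valid(uint64_t esnap_size_bytes, uint64_t cluster_sz_bytes)
    {
            if (cluster_sz_bytes == 0 || esnap_size_bytes % cluster_sz_bytes != 0) {
                    fprintf(stderr,
                            "Esnap device size %llu is not an integer multiple of cluster size %llu\n",
                            (unsigned long long)esnap_size_bytes,
                            (unsigned long long)cluster_sz_bytes);
                    return false;
            }
            return true;
    }

    int
    main(void)
    {
            /* 61440 = 3.75 * 16384, so this reproduces the rejection logged above. */
            return esnap_size_is_valid(61440, 16384) ? 0 : 1;
    }
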
00:03:23.658 [2024-07-15 17:22:19.230794] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:23.658 passed 00:03:23.658 Test: bs_resize_md ...passed 00:03:23.658 Test: bs_destroy ...passed 00:03:23.658 Test: bs_type ...passed 00:03:23.658 Test: bs_super_block ...passed 00:03:23.658 Test: bs_test_recover_cluster_count ...passed 00:03:23.658 Test: bs_grow_live ...passed 00:03:23.658 Test: bs_grow_live_no_space ...passed 00:03:23.658 Test: bs_test_grow ...passed 00:03:23.658 Test: blob_serialize_test ...passed 00:03:23.658 Test: super_block_crc ...passed 00:03:23.658 Test: blob_thin_prov_write_count_io ...passed 00:03:23.658 Test: blob_thin_prov_unmap_cluster ...passed 00:03:23.658 Test: bs_load_iter_test ...passed 00:03:23.658 Test: blob_relations ...[2024-07-15 17:22:19.393583] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.658 [2024-07-15 17:22:19.393638] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.658 [2024-07-15 17:22:19.393754] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.658 [2024-07-15 17:22:19.393767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.658 passed 00:03:23.658 Test: blob_relations2 ...[2024-07-15 17:22:19.405704] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.658 [2024-07-15 17:22:19.405736] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.658 [2024-07-15 17:22:19.405745] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.658 [2024-07-15 17:22:19.405752] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.658 [2024-07-15 17:22:19.405887] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.658 [2024-07-15 17:22:19.405899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.658 [2024-07-15 17:22:19.405935] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:23.658 [2024-07-15 17:22:19.405944] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.658 passed 00:03:23.658 Test: blob_relations3 ...passed 00:03:23.918 Test: blobstore_clean_power_failure ...passed 00:03:23.918 Test: blob_delete_snapshot_power_failure ...[2024-07-15 17:22:19.566393] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:23.918 [2024-07-15 17:22:19.578063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:23.918 [2024-07-15 17:22:19.589723] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:23.918 [2024-07-15 17:22:19.589777] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:23.918 [2024-07-15 17:22:19.589787] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.918 [2024-07-15 17:22:19.601478] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:23.918 [2024-07-15 17:22:19.601516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:23.918 [2024-07-15 17:22:19.601525] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:23.918 [2024-07-15 17:22:19.601533] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.918 [2024-07-15 17:22:19.613177] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:23.918 [2024-07-15 17:22:19.613216] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:23.918 [2024-07-15 17:22:19.613225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:23.918 [2024-07-15 17:22:19.613233] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.918 [2024-07-15 17:22:19.624837] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:23.918 [2024-07-15 17:22:19.624874] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.918 [2024-07-15 17:22:19.636318] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:23.918 [2024-07-15 17:22:19.636376] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.918 [2024-07-15 17:22:19.647892] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:23.918 [2024-07-15 17:22:19.647940] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:23.918 passed 00:03:23.918 Test: blob_create_snapshot_power_failure ...[2024-07-15 17:22:19.682503] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:23.918 [2024-07-15 17:22:19.693956] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:23.918 [2024-07-15 17:22:19.716467] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:23.918 [2024-07-15 17:22:19.727924] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:24.177 passed 00:03:24.177 Test: blob_io_unit ...passed 00:03:24.177 Test: blob_io_unit_compatibility ...passed 00:03:24.177 Test: blob_ext_md_pages ...passed 00:03:24.177 Test: blob_esnap_io_4096_4096 ...passed 00:03:24.177 Test: blob_esnap_io_512_512 ...passed 00:03:24.177 Test: blob_esnap_io_4096_512 ...passed 00:03:24.177 Test: 
blob_esnap_io_512_4096 ...passed 00:03:24.177 Test: blob_esnap_clone_resize ...passed 00:03:24.177 Suite: blob_bs_copy_extent 00:03:24.177 Test: blob_open ...passed 00:03:24.178 Test: blob_create ...[2024-07-15 17:22:19.976739] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:24.178 passed 00:03:24.436 Test: blob_create_loop ...passed 00:03:24.436 Test: blob_create_fail ...[2024-07-15 17:22:20.059819] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:24.436 passed 00:03:24.436 Test: blob_create_internal ...passed 00:03:24.436 Test: blob_create_zero_extent ...passed 00:03:24.436 Test: blob_snapshot ...passed 00:03:24.436 Test: blob_clone ...passed 00:03:24.436 Test: blob_inflate ...[2024-07-15 17:22:20.233028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:24.436 passed 00:03:24.794 Test: blob_delete ...passed 00:03:24.794 Test: blob_resize_test ...[2024-07-15 17:22:20.298701] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:24.794 passed 00:03:24.794 Test: blob_resize_thin_test ...passed 00:03:24.794 Test: channel_ops ...passed 00:03:24.794 Test: blob_super ...passed 00:03:24.794 Test: blob_rw_verify_iov ...passed 00:03:24.794 Test: blob_unmap ...passed 00:03:24.794 Test: blob_iter ...passed 00:03:24.794 Test: blob_parse_md ...passed 00:03:24.794 Test: bs_load_pending_removal ...passed 00:03:25.053 Test: bs_unload ...[2024-07-15 17:22:20.606116] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:25.053 passed 00:03:25.053 Test: bs_usable_clusters ...passed 00:03:25.053 Test: blob_crc ...[2024-07-15 17:22:20.674483] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:25.053 [2024-07-15 17:22:20.674548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:25.053 passed 00:03:25.053 Test: blob_flags ...passed 00:03:25.053 Test: bs_version ...passed 00:03:25.053 Test: blob_set_xattrs_test ...[2024-07-15 17:22:20.775885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:25.053 [2024-07-15 17:22:20.775945] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:25.053 passed 00:03:25.053 Test: blob_thin_prov_alloc ...passed 00:03:25.053 Test: blob_insert_cluster_msg_test ...passed 00:03:25.312 Test: blob_thin_prov_rw ...passed 00:03:25.312 Test: blob_thin_prov_rle ...passed 00:03:25.312 Test: blob_thin_prov_rw_iov ...passed 00:03:25.312 Test: blob_snapshot_rw ...passed 00:03:25.312 Test: blob_snapshot_rw_iov ...passed 00:03:25.312 Test: blob_inflate_rw ...passed 00:03:25.312 Test: blob_snapshot_freeze_io ...passed 00:03:25.570 Test: blob_operation_split_rw ...passed 00:03:25.570 Test: blob_operation_split_rw_iov ...passed 00:03:25.570 Test: blob_simultaneous_operations ...[2024-07-15 17:22:21.278987] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:25.570 [2024-07-15 17:22:21.279053] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.570 [2024-07-15 17:22:21.279342] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:25.570 [2024-07-15 17:22:21.279353] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.570 [2024-07-15 17:22:21.281706] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:25.570 [2024-07-15 17:22:21.281724] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.570 [2024-07-15 17:22:21.281741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:25.570 [2024-07-15 17:22:21.281749] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:25.570 passed 00:03:25.570 Test: blob_persist_test ...passed 00:03:25.570 Test: blob_decouple_snapshot ...passed 00:03:25.831 Test: blob_seek_io_unit ...passed 00:03:25.831 Test: blob_nested_freezes ...passed 00:03:25.831 Test: blob_clone_resize ...passed 00:03:25.831 Test: blob_shallow_copy ...[2024-07-15 17:22:21.501243] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:25.831 [2024-07-15 17:22:21.501313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:25.831 [2024-07-15 17:22:21.501325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:25.831 passed 00:03:25.831 Suite: blob_blob_copy_extent 00:03:25.831 Test: blob_write ...passed 00:03:25.831 Test: blob_read ...passed 00:03:25.831 Test: blob_rw_verify ...passed 00:03:26.090 Test: blob_rw_verify_iov_nomem ...passed 00:03:26.090 Test: blob_rw_iov_read_only ...passed 00:03:26.090 Test: blob_xattr ...passed 00:03:26.090 Test: blob_dirty_shutdown ...passed 00:03:26.090 Test: blob_is_degraded ...passed 00:03:26.090 Suite: blob_esnap_bs_copy_extent 00:03:26.090 Test: blob_esnap_create ...passed 00:03:26.090 Test: blob_esnap_thread_add_remove ...passed 00:03:26.090 Test: blob_esnap_clone_snapshot ...passed 00:03:26.349 Test: blob_esnap_clone_inflate ...passed 00:03:26.349 Test: blob_esnap_clone_decouple ...passed 00:03:26.349 Test: blob_esnap_clone_reload ...passed 00:03:26.349 Test: blob_esnap_hotplug ...passed 00:03:26.349 Test: blob_set_parent ...[2024-07-15 17:22:22.076814] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:26.349 [2024-07-15 17:22:22.076875] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:26.349 [2024-07-15 17:22:22.077054] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:26.349 
[2024-07-15 17:22:22.077069] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:26.349 [2024-07-15 17:22:22.077125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:26.349 passed 00:03:26.349 Test: blob_set_external_parent ...[2024-07-15 17:22:22.111894] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:26.349 [2024-07-15 17:22:22.111949] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:26.349 [2024-07-15 17:22:22.111960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:26.349 [2024-07-15 17:22:22.112009] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:26.349 passed 00:03:26.349 00:03:26.349 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.349 suites 16 16 n/a 0 0 00:03:26.349 tests 376 376 376 0 0 00:03:26.349 asserts 143965 143965 143965 0 n/a 00:03:26.349 00:03:26.349 Elapsed time = 12.078 seconds 00:03:26.349 17:22:22 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:03:26.349 00:03:26.349 00:03:26.349 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.349 http://cunit.sourceforge.net/ 00:03:26.349 00:03:26.349 00:03:26.349 Suite: blob_bdev 00:03:26.349 Test: create_bs_dev ...passed 00:03:26.349 Test: create_bs_dev_ro ...passed 00:03:26.349 Test: create_bs_dev_rw ...passed 00:03:26.349 Test: claim_bs_dev ...passed 00:03:26.349 Test: claim_bs_dev_ro ...passed 00:03:26.349 Test: deferred_destroy_refs ...passed 00:03:26.349 Test: deferred_destroy_channels ...passed 00:03:26.349 Test: deferred_destroy_threads ...passed 00:03:26.349 00:03:26.349 [2024-07-15 17:22:22.133209] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:03:26.349 [2024-07-15 17:22:22.133390] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:03:26.349 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.349 suites 1 1 n/a 0 0 00:03:26.349 tests 8 8 8 0 0 00:03:26.349 asserts 119 119 119 0 n/a 00:03:26.349 00:03:26.349 Elapsed time = 0.000 seconds 00:03:26.349 17:22:22 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:03:26.349 00:03:26.349 00:03:26.349 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.349 http://cunit.sourceforge.net/ 00:03:26.349 00:03:26.349 00:03:26.349 Suite: tree 00:03:26.349 Test: blobfs_tree_op_test ...passed 00:03:26.349 00:03:26.349 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.349 suites 1 1 n/a 0 0 00:03:26.349 tests 1 1 1 0 0 00:03:26.349 asserts 27 27 27 0 n/a 00:03:26.349 00:03:26.349 Elapsed time = 0.000 seconds 00:03:26.349 17:22:22 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:03:26.349 00:03:26.349 00:03:26.349 
CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.349 http://cunit.sourceforge.net/ 00:03:26.349 00:03:26.349 00:03:26.349 Suite: blobfs_async_ut 00:03:26.608 Test: fs_init ...passed 00:03:26.608 Test: fs_open ...passed 00:03:26.608 Test: fs_create ...passed 00:03:26.608 Test: fs_truncate ...passed 00:03:26.608 Test: fs_rename ...[2024-07-15 17:22:22.227601] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:03:26.608 passed 00:03:26.608 Test: fs_rw_async ...passed 00:03:26.608 Test: fs_writev_readv_async ...passed 00:03:26.608 Test: tree_find_buffer_ut ...passed 00:03:26.608 Test: channel_ops ...passed 00:03:26.608 Test: channel_ops_sync ...passed 00:03:26.608 00:03:26.608 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.608 suites 1 1 n/a 0 0 00:03:26.608 tests 10 10 10 0 0 00:03:26.608 asserts 292 292 292 0 n/a 00:03:26.608 00:03:26.608 Elapsed time = 0.133 seconds 00:03:26.608 17:22:22 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:03:26.608 00:03:26.608 00:03:26.608 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.608 http://cunit.sourceforge.net/ 00:03:26.608 00:03:26.608 00:03:26.608 Suite: blobfs_sync_ut 00:03:26.608 Test: cache_read_after_write ...[2024-07-15 17:22:22.323435] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:03:26.608 passed 00:03:26.608 Test: file_length ...passed 00:03:26.608 Test: append_write_to_extend_blob ...passed 00:03:26.608 Test: partial_buffer ...passed 00:03:26.608 Test: cache_write_null_buffer ...passed 00:03:26.608 Test: fs_create_sync ...passed 00:03:26.608 Test: fs_rename_sync ...passed 00:03:26.608 Test: cache_append_no_cache ...passed 00:03:26.608 Test: fs_delete_file_without_close ...passed 00:03:26.608 00:03:26.608 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.608 suites 1 1 n/a 0 0 00:03:26.608 tests 9 9 9 0 0 00:03:26.608 asserts 345 345 345 0 n/a 00:03:26.608 00:03:26.608 Elapsed time = 0.266 seconds 00:03:26.608 17:22:22 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:03:26.608 00:03:26.608 00:03:26.608 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.608 http://cunit.sourceforge.net/ 00:03:26.608 00:03:26.608 00:03:26.608 Suite: blobfs_bdev_ut 00:03:26.608 Test: spdk_blobfs_bdev_detect_test ...passed 00:03:26.608 Test: spdk_blobfs_bdev_create_test ...passed 00:03:26.608 Test: spdk_blobfs_bdev_mount_test ...passed 00:03:26.608 00:03:26.608 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.608 suites 1 1 n/a 0 0 00:03:26.608 tests 3 3 3 0 0 00:03:26.608 asserts 9 9 9 0 n/a 00:03:26.608 00:03:26.608 Elapsed time = 0.000 seconds 00:03:26.608 [2024-07-15 17:22:22.434488] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:26.608 [2024-07-15 17:22:22.434730] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:03:26.608 00:03:26.608 real 0m12.407s 00:03:26.608 user 0m12.378s 00:03:26.608 sys 0m0.165s 00:03:26.608 17:22:22 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.608 17:22:22 
unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:03:26.608 ************************************ 00:03:26.608 END TEST unittest_blob_blobfs 00:03:26.608 ************************************ 00:03:26.869 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:26.869 17:22:22 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:03:26.869 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.869 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.869 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:26.869 ************************************ 00:03:26.869 START TEST unittest_event 00:03:26.869 ************************************ 00:03:26.869 17:22:22 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:03:26.869 17:22:22 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:03:26.869 00:03:26.869 00:03:26.869 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.869 http://cunit.sourceforge.net/ 00:03:26.869 00:03:26.869 00:03:26.869 Suite: app_suite 00:03:26.869 Test: test_spdk_app_parse_args ...app_ut [options] 00:03:26.869 00:03:26.869 CPU options: 00:03:26.869 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:26.869 (like [0,1,10]) 00:03:26.869 --lcores lcore to CPU mapping list. The list is in the format: 00:03:26.869 [<,lcores[@CPUs]>...] 00:03:26.869 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:26.869 Within the group, '-' is used for range separator, 00:03:26.869 ',' is used for single number separator. 00:03:26.869 '( )' can be omitted for single element group, 00:03:26.869 '@' can be omitted if cpus and lcores have the same value 00:03:26.869 --disable-cpumask-locks Disable CPU core lock files. 00:03:26.869 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:26.869 pollers in the app support interrupt mode) 00:03:26.869 -p, --main-core main (primary) core for DPDK 00:03:26.869 00:03:26.869 Configuration options: 00:03:26.869 -c, --config, --json JSON config file 00:03:26.869 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:26.869 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:03:26.869 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:26.869 --rpcs-allowed comma-separated list of permitted RPCS 00:03:26.869 --json-ignore-init-errors don't exit on invalid config entry 00:03:26.869 00:03:26.869 Memory options: 00:03:26.869 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:26.869 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:26.869 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:26.869 -R, --huge-unlink unlink huge files after initialization 00:03:26.869 -n, --mem-channels number of memory channels used for DPDK 00:03:26.869 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:26.869 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:26.869 --no-huge run without using hugepages 00:03:26.869 -i, --shm-id shared memory ID (optional) 00:03:26.869 -g, --single-file-segments force creating just one hugetlbfs file 00:03:26.869 00:03:26.869 PCI options: 00:03:26.869 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:26.869 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:26.869 -u, --no-pci disable PCI access 00:03:26.869 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:26.869 00:03:26.869 Log options: 00:03:26.869 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:26.869 --silence-noticelog disable notice level logging to stderr 00:03:26.869 00:03:26.869 Trace options: 00:03:26.869 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:26.869 setting 0 to disable trace (default 32768) 00:03:26.869 Tracepoints vary in size and can use more than one trace entry. 00:03:26.869 -e, --tpoint-group [:] 00:03:26.869 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:26.869 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:26.869 a tracepoint group. First tpoint inside a group can be enabled by 00:03:26.869 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:26.869 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:26.869 in /include/spdk_internal/trace_defs.h 00:03:26.869 00:03:26.869 Other options: 00:03:26.869 -h, --help show this usage 00:03:26.869 -v, --version print SPDK version 00:03:26.869 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:26.869 --env-context Opaque context for use of the env implementation 00:03:26.869 app_ut [options] 00:03:26.869 00:03:26.869 CPU options: 00:03:26.869 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:26.869 (like [0,1,10]) 00:03:26.869 --lcores lcore to CPU mapping list. The list is in the format: 00:03:26.869 [<,lcores[@CPUs]>...] 00:03:26.869 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:26.869 Within the group, '-' is used for range separator, 00:03:26.869 ',' is used for single number separator. 00:03:26.869 '( )' can be omitted for single element group, 00:03:26.869 '@' can be omitted if cpus and lcores have the same value 00:03:26.869 --disable-cpumask-locks Disable CPU core lock files. 
00:03:26.870 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:26.870 pollers in the app support interrupt mode) 00:03:26.870 -p, --main-core main (primary) core for DPDK 00:03:26.870 00:03:26.870 Configuration options: 00:03:26.870 -c, --config, --json JSON config file 00:03:26.870 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:26.870 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:26.870 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:26.870 --rpcs-allowed comma-separated list of permitted RPCS 00:03:26.870 --json-ignore-init-errors don't exit on invalid config entry 00:03:26.870 00:03:26.870 Memory options: 00:03:26.870 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:26.870 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:26.870 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:26.870 -R, --huge-unlink unlink huge files after initialization 00:03:26.870 -n, --mem-channels number of memory channels used for DPDK 00:03:26.870 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:26.870 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:26.870 --no-huge run without using hugepages 00:03:26.870 -i, --shm-id shared memory ID (optional) 00:03:26.870 -g, --single-file-segments force creating just one hugetlbfs file 00:03:26.870 00:03:26.870 PCI options: 00:03:26.870 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:26.870 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:26.870 -u, --no-pci disable PCI access 00:03:26.870 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:26.870 00:03:26.870 Log options: 00:03:26.870 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:26.870 --silence-noticelog disable notice level logging to stderr 00:03:26.870 00:03:26.870 Trace options: 00:03:26.870 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:26.870 setting 0 to disable trace (default 32768) 00:03:26.870 Tracepoints vary in size and can use more than one trace entry. 00:03:26.870 -e, --tpoint-group [:] 00:03:26.870 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:26.870 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:26.870 a tracepoint group. First tpoint inside a group can be enabled by 00:03:26.870 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:26.870 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:03:26.870 in /include/spdk_internal/trace_defs.h 00:03:26.870 00:03:26.870 Other options: 00:03:26.870 -h, --help show this usage 00:03:26.870 -v, --version print SPDK version 00:03:26.870 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:26.870 --env-context Opaque context for use of the env implementation 00:03:26.870 app_ut [options] 00:03:26.870 00:03:26.870 CPU options: 00:03:26.870 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:03:26.870 (like [0,1,10]) 00:03:26.870 --lcores lcore to CPU mapping list. The list is in the format: 00:03:26.870 [<,lcores[@CPUs]>...] 
00:03:26.870 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:03:26.870 Within the group, '-' is used for range separator, 00:03:26.870 ',' is used for single number separator. 00:03:26.870 '( )' can be omitted for single element group, 00:03:26.870 '@' can be omitted if cpus and lcores have the same value 00:03:26.870 --disable-cpumask-locks Disable CPU core lock files. 00:03:26.870 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:03:26.870 pollers in the app support interrupt mode) 00:03:26.870 -p, --main-core main (primary) core for DPDK 00:03:26.870 00:03:26.870 Configuration options: 00:03:26.870 -c, --config, --json JSON config file 00:03:26.870 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:03:26.870 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 00:03:26.870 --wait-for-rpc wait for RPCs to initialize subsystems 00:03:26.870 --rpcs-allowed comma-separated list of permitted RPCS 00:03:26.870 --json-ignore-init-errors don't exit on invalid config entry 00:03:26.870 00:03:26.870 Memory options: 00:03:26.870 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:03:26.870 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:03:26.870 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:03:26.870 -R, --huge-unlink unlink huge files after initialization 00:03:26.870 -n, --mem-channels number of memory channels used for DPDK 00:03:26.870 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:03:26.870 --msg-mempool-size global message memory pool size in count (default: 262143) 00:03:26.870 --no-huge run without using hugepages 00:03:26.870 -i, --shm-id shared memory ID (optional) 00:03:26.870 -g, --single-file-segments force creating just one hugetlbfs file 00:03:26.870 00:03:26.870 PCI options: 00:03:26.870 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:03:26.870 -B, --pci-blocked pci addr to block (can be used more than once) 00:03:26.870 -u, --no-pci disable PCI access 00:03:26.870 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:03:26.870 00:03:26.870 Log options: 00:03:26.870 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:03:26.870 --silence-noticelog disable notice level logging to stderr 00:03:26.870 00:03:26.870 Trace options: 00:03:26.870 --num-trace-entries number of trace entries for each core, must be power of 2, 00:03:26.870 setting 0 to disable trace (default 32768) 00:03:26.870 Tracepoints vary in size and can use more than one trace entry. 00:03:26.870 -e, --tpoint-group [:] 00:03:26.870 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:03:26.870 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:03:26.870 a tracepoint group. First tpoint inside a group can be enabled by 00:03:26.870 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:03:26.870 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:03:26.870 in /include/spdk_internal/trace_defs.h 00:03:26.870 00:03:26.870 Other options: 00:03:26.870 -h, --help show this usage 00:03:26.870 -v, --version print SPDK version 00:03:26.870 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:03:26.870 --env-context Opaque context for use of the env implementation 00:03:26.870 passed 00:03:26.870 00:03:26.870 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.870 suites 1 1 n/a 0 0 00:03:26.870 tests 1 1 1 0 0 00:03:26.870 asserts 8 8 8 0 n/a 00:03:26.870 00:03:26.870 Elapsed time = 0.000 seconds 00:03:26.870 app_ut: invalid option -- z 00:03:26.870 app_ut: unrecognized option `--test-long-opt' 00:03:26.870 [2024-07-15 17:22:22.480926] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1193:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:03:26.870 [2024-07-15 17:22:22.481145] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1373:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time 00:03:26.871 [2024-07-15 17:22:22.481244] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1278:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:03:26.871 17:22:22 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:03:26.871 00:03:26.871 00:03:26.871 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.871 http://cunit.sourceforge.net/ 00:03:26.871 00:03:26.871 00:03:26.871 Suite: app_suite 00:03:26.871 Test: test_create_reactor ...passed 00:03:26.871 Test: test_init_reactors ...passed 00:03:26.871 Test: test_event_call ...passed 00:03:26.871 Test: test_schedule_thread ...passed 00:03:26.871 Test: test_reschedule_thread ...passed 00:03:26.871 Test: test_bind_thread ...passed 00:03:26.871 Test: test_for_each_reactor ...passed 00:03:26.871 Test: test_reactor_stats ...passed 00:03:26.871 Test: test_scheduler ...passed 00:03:26.871 Test: test_governor ...passed 00:03:26.871 00:03:26.871 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.871 suites 1 1 n/a 0 0 00:03:26.871 tests 10 10 10 0 0 00:03:26.871 asserts 336 336 336 0 n/a 00:03:26.871 00:03:26.871 Elapsed time = 0.000 seconds 00:03:26.871 00:03:26.871 real 0m0.012s 00:03:26.871 user 0m0.005s 00:03:26.871 sys 0m0.009s 00:03:26.871 17:22:22 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.871 ************************************ 00:03:26.871 END TEST unittest_event 00:03:26.871 17:22:22 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:03:26.871 ************************************ 00:03:26.871 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:26.871 17:22:22 unittest -- unit/unittest.sh@235 -- # uname -s 00:03:26.871 17:22:22 unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:03:26.871 17:22:22 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:26.871 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.871 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.871 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:26.871 ************************************ 00:03:26.871 START TEST unittest_accel 00:03:26.871 ************************************ 00:03:26.871 17:22:22 unittest.unittest_accel -- 
common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:03:26.871 00:03:26.871 00:03:26.871 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.871 http://cunit.sourceforge.net/ 00:03:26.871 00:03:26.871 00:03:26.871 Suite: accel_sequence 00:03:26.871 Test: test_sequence_fill_copy ...passed 00:03:26.871 Test: test_sequence_abort ...passed 00:03:26.871 Test: test_sequence_append_error ...passed 00:03:26.871 Test: test_sequence_completion_error ...[2024-07-15 17:22:22.537396] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0xa774b2ce8c0 00:03:26.871 [2024-07-15 17:22:22.537695] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0xa774b2ce8c0 00:03:26.871 [2024-07-15 17:22:22.538075] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0xa774b2ce8c0 00:03:26.871 [2024-07-15 17:22:22.538121] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0xa774b2ce8c0 00:03:26.871 passed 00:03:26.871 Test: test_sequence_decompress ...passed 00:03:26.871 Test: test_sequence_reverse ...passed 00:03:26.871 Test: test_sequence_copy_elision ...passed 00:03:26.871 Test: test_sequence_accel_buffers ...passed 00:03:26.871 Test: test_sequence_memory_domain ...[2024-07-15 17:22:22.540516] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1748:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:03:26.871 [2024-07-15 17:22:22.540554] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1787:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:03:26.871 passed 00:03:26.871 Test: test_sequence_module_memory_domain ...passed 00:03:26.871 Test: test_sequence_crypto ...passed 00:03:26.871 Test: test_sequence_driver ...[2024-07-15 17:22:22.541525] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1895:accel_process_sequence: *ERROR*: Failed to execute sequence: 0xa774b2cef80 using driver: ut 00:03:26.871 [2024-07-15 17:22:22.541574] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0xa774b2cef80 through driver: ut 00:03:26.871 passed 00:03:26.871 Test: test_sequence_same_iovs ...passed 00:03:26.871 Test: test_sequence_crc32 ...passed 00:03:26.871 Suite: accel 00:03:26.871 Test: test_spdk_accel_task_complete ...passed 00:03:26.871 Test: test_get_task ...passed 00:03:26.871 Test: test_spdk_accel_submit_copy ...passed 00:03:26.871 Test: test_spdk_accel_submit_dualcast ...passed 00:03:26.871 Test: test_spdk_accel_submit_compare ...passed 00:03:26.871 Test: test_spdk_accel_submit_fill ...passed 00:03:26.871 Test: test_spdk_accel_submit_crc32c ...passed 00:03:26.871 Test: test_spdk_accel_submit_crc32cv ...[2024-07-15 17:22:22.542315] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:26.871 [2024-07-15 17:22:22.542331] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:03:26.871 passed 00:03:26.871 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:03:26.871 Test: test_spdk_accel_submit_xor ...passed 00:03:26.871 Test: 
test_spdk_accel_module_find_by_name ...passed 00:03:26.871 Test: test_spdk_accel_module_register ...passed 00:03:26.871 00:03:26.871 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.871 suites 2 2 n/a 0 0 00:03:26.871 tests 26 26 26 0 0 00:03:26.871 asserts 830 830 830 0 n/a 00:03:26.871 00:03:26.871 Elapsed time = 0.008 seconds 00:03:26.871 00:03:26.871 real 0m0.015s 00:03:26.871 user 0m0.014s 00:03:26.871 sys 0m0.000s 00:03:26.871 17:22:22 unittest.unittest_accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.871 ************************************ 00:03:26.871 END TEST unittest_accel 00:03:26.871 ************************************ 00:03:26.871 17:22:22 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:03:26.871 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:26.871 17:22:22 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:26.871 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.871 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.871 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:26.871 ************************************ 00:03:26.871 START TEST unittest_ioat 00:03:26.871 ************************************ 00:03:26.871 17:22:22 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:03:26.871 00:03:26.871 00:03:26.871 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.871 http://cunit.sourceforge.net/ 00:03:26.871 00:03:26.871 00:03:26.871 Suite: ioat 00:03:26.871 Test: ioat_state_check ...passed 00:03:26.871 00:03:26.871 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.871 suites 1 1 n/a 0 0 00:03:26.871 tests 1 1 1 0 0 00:03:26.871 asserts 32 32 32 0 n/a 00:03:26.871 00:03:26.871 Elapsed time = 0.000 seconds 00:03:26.871 00:03:26.871 real 0m0.006s 00:03:26.871 user 0m0.005s 00:03:26.871 sys 0m0.005s 00:03:26.871 17:22:22 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.871 ************************************ 00:03:26.872 END TEST unittest_ioat 00:03:26.872 17:22:22 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:03:26.872 ************************************ 00:03:26.872 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:26.872 17:22:22 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:26.872 17:22:22 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:26.872 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.872 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.872 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:26.872 ************************************ 00:03:26.872 START TEST unittest_idxd_user 00:03:26.872 ************************************ 00:03:26.872 17:22:22 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:03:26.872 00:03:26.872 00:03:26.872 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.872 http://cunit.sourceforge.net/ 00:03:26.872 00:03:26.872 00:03:26.872 Suite: idxd_user 00:03:26.872 Test: 
test_idxd_wait_cmd ...[2024-07-15 17:22:22.633399] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:26.872 passed 00:03:26.872 Test: test_idxd_reset_dev ...[2024-07-15 17:22:22.633643] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:03:26.872 passed 00:03:26.872 Test: test_idxd_group_config ...passed 00:03:26.872 Test: test_idxd_wq_config ...passed 00:03:26.872 00:03:26.872 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.872 suites 1 1 n/a 0 0 00:03:26.872 tests 4 4 4 0 0 00:03:26.872 asserts 20 20 20 0 n/a 00:03:26.872 00:03:26.872 Elapsed time = 0.000 seconds[2024-07-15 17:22:22.633680] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:03:26.872 [2024-07-15 17:22:22.633697] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:03:26.872 00:03:26.872 00:03:26.872 real 0m0.006s 00:03:26.872 user 0m0.000s 00:03:26.872 sys 0m0.005s 00:03:26.872 17:22:22 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.872 17:22:22 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:03:26.872 ************************************ 00:03:26.872 END TEST unittest_idxd_user 00:03:26.872 ************************************ 00:03:26.872 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:26.872 17:22:22 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:03:26.872 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.872 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.872 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:26.872 ************************************ 00:03:26.872 START TEST unittest_iscsi 00:03:26.872 ************************************ 00:03:26.872 17:22:22 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:03:26.872 17:22:22 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:03:26.872 00:03:26.872 00:03:26.872 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.872 http://cunit.sourceforge.net/ 00:03:26.872 00:03:26.872 00:03:26.872 Suite: conn_suite 00:03:26.872 Test: read_task_split_in_order_case ...passed 00:03:26.872 Test: read_task_split_reverse_order_case ...passed 00:03:26.872 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:03:26.872 Test: process_non_read_task_completion_test ...passed 00:03:26.872 Test: free_tasks_on_connection ...passed 00:03:26.872 Test: free_tasks_with_queued_datain ...passed 00:03:26.872 Test: abort_queued_datain_task_test ...passed 00:03:26.872 Test: abort_queued_datain_tasks_test ...passed 00:03:26.872 00:03:26.872 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.872 suites 1 1 n/a 0 0 00:03:26.872 tests 8 8 8 0 0 00:03:26.872 asserts 230 230 230 0 n/a 00:03:26.872 00:03:26.872 Elapsed time = 0.000 seconds 00:03:26.872 17:22:22 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:03:26.872 00:03:26.872 00:03:26.872 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.872 http://cunit.sourceforge.net/ 00:03:26.872 00:03:26.872 00:03:26.872 Suite: iscsi_suite 00:03:26.872 Test: 
param_negotiation_test ...passed 00:03:26.872 Test: list_negotiation_test ...passed 00:03:26.872 Test: parse_valid_test ...passed 00:03:26.872 Test: parse_invalid_test ...[2024-07-15 17:22:22.683839] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:26.872 [2024-07-15 17:22:22.684030] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:03:26.872 [2024-07-15 17:22:22.684048] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:03:26.872 [2024-07-15 17:22:22.684074] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:03:26.872 [2024-07-15 17:22:22.684092] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:03:26.872 [2024-07-15 17:22:22.684105] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:03:26.872 [2024-07-15 17:22:22.684117] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:03:26.872 passed 00:03:26.872 00:03:26.872 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.872 suites 1 1 n/a 0 0 00:03:26.872 tests 4 4 4 0 0 00:03:26.872 asserts 161 161 161 0 n/a 00:03:26.872 00:03:26.872 Elapsed time = 0.000 seconds 00:03:26.872 17:22:22 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:03:26.872 00:03:26.872 00:03:26.872 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.872 http://cunit.sourceforge.net/ 00:03:26.872 00:03:26.872 00:03:26.872 Suite: iscsi_target_node_suite 00:03:26.872 Test: add_lun_test_cases ...[2024-07-15 17:22:22.689356] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:03:26.872 [2024-07-15 17:22:22.689565] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:03:26.872 passed 00:03:26.872 Test: allow_any_allowed ...passed 00:03:26.872 Test: allow_ipv6_allowed ...passed 00:03:26.872 Test: allow_ipv6_denied ...passed 00:03:26.872 Test: allow_ipv6_invalid ...passed 00:03:26.872 Test: allow_ipv4_allowed ...passed 00:03:26.872 Test: allow_ipv4_denied ...passed 00:03:26.872 Test: allow_ipv4_invalid ...passed 00:03:26.872 Test: node_access_allowed ...passed 00:03:26.872 Test: node_access_denied_by_empty_netmask ...[2024-07-15 17:22:22.689584] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:26.872 [2024-07-15 17:22:22.689597] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:03:26.872 [2024-07-15 17:22:22.689617] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:03:26.872 passed 00:03:26.872 Test: node_access_multi_initiator_groups_cases ...passed 00:03:26.872 Test: allow_iscsi_name_multi_maps_case ...passed 00:03:26.873 Test: chap_param_test_cases ...passed[2024-07-15 17:22:22.689731] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:03:26.873 [2024-07-15 17:22:22.689750] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params 
(d=0,r=0,m=1) 00:03:26.873 [2024-07-15 17:22:22.689762] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:03:26.873 [2024-07-15 17:22:22.689775] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:03:26.873 [2024-07-15 17:22:22.689787] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:03:26.873 00:03:26.873 00:03:26.873 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.873 suites 1 1 n/a 0 0 00:03:26.873 tests 13 13 13 0 0 00:03:26.873 asserts 50 50 50 0 n/a 00:03:26.873 00:03:26.873 Elapsed time = 0.000 seconds 00:03:26.873 17:22:22 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:03:26.873 00:03:26.873 00:03:26.873 CUnit - A unit testing framework for C - Version 2.1-3 00:03:26.873 http://cunit.sourceforge.net/ 00:03:26.873 00:03:26.873 00:03:26.873 Suite: iscsi_suite 00:03:26.873 Test: op_login_check_target_test ...passed 00:03:26.873 Test: op_login_session_normal_test ...[2024-07-15 17:22:22.695027] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:03:26.873 [2024-07-15 17:22:22.695236] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:26.873 [2024-07-15 17:22:22.695252] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:26.873 passed 00:03:26.873 Test: maxburstlength_test ...[2024-07-15 17:22:22.695263] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:03:26.873 [2024-07-15 17:22:22.695295] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:03:26.873 [2024-07-15 17:22:22.695309] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:26.873 [2024-07-15 17:22:22.695336] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:03:26.873 [2024-07-15 17:22:22.695347] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:03:26.873 [2024-07-15 17:22:22.695402] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:26.873 [2024-07-15 17:22:22.695415] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:03:26.873 passed 00:03:26.873 Test: underflow_for_read_transfer_test ...passed 00:03:26.873 Test: underflow_for_zero_read_transfer_test ...passed 00:03:26.873 Test: underflow_for_request_sense_test ...passed 00:03:26.873 Test: underflow_for_check_condition_test ...passed 00:03:26.873 Test: add_transfer_task_test ...passed 00:03:26.873 Test: get_transfer_task_test ...passed 00:03:26.873 Test: del_transfer_task_test ...passed 00:03:26.873 Test: clear_all_transfer_tasks_test ...passed 00:03:26.873 Test: build_iovs_test ...passed 00:03:26.873 Test: build_iovs_with_md_test 
...passed 00:03:26.873 Test: pdu_hdr_op_login_test ...passed 00:03:26.873 Test: pdu_hdr_op_text_test ...passed 00:03:26.873 Test: pdu_hdr_op_logout_test ...passed 00:03:26.873 Test: pdu_hdr_op_scsi_test ...[2024-07-15 17:22:22.695564] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:03:26.873 [2024-07-15 17:22:22.695579] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:03:26.873 [2024-07-15 17:22:22.695591] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:03:26.873 [2024-07-15 17:22:22.695606] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:26.873 [2024-07-15 17:22:22.695618] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:03:26.873 [2024-07-15 17:22:22.695629] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:03:26.873 [2024-07-15 17:22:22.695643] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:03:26.873 [2024-07-15 17:22:22.695659] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:26.873 [2024-07-15 17:22:22.695673] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:03:26.873 [2024-07-15 17:22:22.695683] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:03:26.873 [2024-07-15 17:22:22.695699] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:03:26.873 [2024-07-15 17:22:22.695711] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:03:26.873 [2024-07-15 17:22:22.695723] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:03:26.873 passed 00:03:26.873 Test: pdu_hdr_op_task_mgmt_test ...passed 00:03:26.873 Test: pdu_hdr_op_nopout_test ...passed 00:03:26.873 Test: pdu_hdr_op_data_test ...passed 00:03:26.873 Test: empty_text_with_cbit_test ...passed 00:03:26.873 Test: pdu_payload_read_test ...[2024-07-15 17:22:22.695737] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:03:26.873 [2024-07-15 17:22:22.695749] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:03:26.873 [2024-07-15 17:22:22.695765] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:03:26.873 [2024-07-15 17:22:22.695777] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:26.873 [2024-07-15 17:22:22.695787] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:03:26.873 
[2024-07-15 17:22:22.695796] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:03:26.873 [2024-07-15 17:22:22.695809] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:03:26.873 [2024-07-15 17:22:22.695820] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: *ERROR*: Not found task for transfer_tag=0 00:03:26.873 [2024-07-15 17:22:22.695831] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:03:26.873 [2024-07-15 17:22:22.695842] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:03:26.873 [2024-07-15 17:22:22.695853] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:03:26.873 [2024-07-15 17:22:22.695863] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:03:26.873 [2024-07-15 17:22:22.695874] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:03:26.873 [2024-07-15 17:22:22.696281] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:03:26.873 passed 00:03:26.873 Test: data_out_pdu_sequence_test ...passed 00:03:26.873 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:03:26.873 00:03:26.873 Run Summary: Type Total Ran Passed Failed Inactive 00:03:26.873 suites 1 1 n/a 0 0 00:03:26.873 tests 24 24 24 0 0 00:03:26.873 asserts 150253 150253 150253 0 n/a 00:03:26.873 00:03:26.873 Elapsed time = 0.000 seconds 00:03:27.133 17:22:22 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:03:27.133 00:03:27.133 00:03:27.133 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.133 http://cunit.sourceforge.net/ 00:03:27.133 00:03:27.133 00:03:27.133 Suite: init_grp_suite 00:03:27.133 Test: create_initiator_group_success_case ...passed 00:03:27.133 Test: find_initiator_group_success_case ...passed 00:03:27.133 Test: register_initiator_group_twice_case ...passed 00:03:27.133 Test: add_initiator_name_success_case ...passed 00:03:27.133 Test: add_initiator_name_fail_case ...[2024-07-15 17:22:22.703277] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:03:27.133 passed 00:03:27.133 Test: delete_all_initiator_names_success_case ...passed 00:03:27.133 Test: add_netmask_success_case ...passed 00:03:27.133 Test: add_netmask_fail_case ...[2024-07-15 17:22:22.703486] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:03:27.133 passed 00:03:27.133 Test: delete_all_netmasks_success_case ...passed 00:03:27.133 Test: initiator_name_overwrite_all_to_any_case ...passed 00:03:27.133 Test: netmask_overwrite_all_to_any_case ...passed 00:03:27.133 Test: add_delete_initiator_names_case ...passed 00:03:27.133 Test: add_duplicated_initiator_names_case ...passed 00:03:27.133 Test: delete_nonexisting_initiator_names_case ...passed 00:03:27.133 Test: add_delete_netmasks_case ...passed 00:03:27.133 Test: add_duplicated_netmasks_case ...passed 
00:03:27.133 Test: delete_nonexisting_netmasks_case ...passed 00:03:27.133 00:03:27.133 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.133 suites 1 1 n/a 0 0 00:03:27.133 tests 17 17 17 0 0 00:03:27.133 asserts 108 108 108 0 n/a 00:03:27.133 00:03:27.133 Elapsed time = 0.000 seconds 00:03:27.133 17:22:22 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 00:03:27.133 00:03:27.133 00:03:27.133 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.133 http://cunit.sourceforge.net/ 00:03:27.133 00:03:27.133 00:03:27.133 Suite: portal_grp_suite 00:03:27.133 Test: portal_create_ipv4_normal_case ...passed 00:03:27.133 Test: portal_create_ipv6_normal_case ...passed 00:03:27.133 Test: portal_create_ipv4_wildcard_case ...passed 00:03:27.133 Test: portal_create_ipv6_wildcard_case ...passed 00:03:27.133 Test: portal_create_twice_case ...passed 00:03:27.133 Test: portal_grp_register_unregister_case ...passed 00:03:27.133 Test: portal_grp_register_twice_case ...passed 00:03:27.133 Test: portal_grp_add_delete_case ...passed[2024-07-15 17:22:22.707808] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:03:27.133 00:03:27.133 Test: portal_grp_add_delete_twice_case ...passed 00:03:27.133 00:03:27.134 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.134 suites 1 1 n/a 0 0 00:03:27.134 tests 9 9 9 0 0 00:03:27.134 asserts 44 44 44 0 n/a 00:03:27.134 00:03:27.134 Elapsed time = 0.000 seconds 00:03:27.134 00:03:27.134 real 0m0.035s 00:03:27.134 user 0m0.008s 00:03:27.134 sys 0m0.026s 00:03:27.134 17:22:22 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.134 17:22:22 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:03:27.134 ************************************ 00:03:27.134 END TEST unittest_iscsi 00:03:27.134 ************************************ 00:03:27.134 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:27.134 17:22:22 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:03:27.134 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.134 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.134 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.134 ************************************ 00:03:27.134 START TEST unittest_json 00:03:27.134 ************************************ 00:03:27.134 17:22:22 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:03:27.134 17:22:22 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:03:27.134 00:03:27.134 00:03:27.134 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.134 http://cunit.sourceforge.net/ 00:03:27.134 00:03:27.134 00:03:27.134 Suite: json 00:03:27.134 Test: test_parse_literal ...passed 00:03:27.134 Test: test_parse_string_simple ...passed 00:03:27.134 Test: test_parse_string_control_chars ...passed 00:03:27.134 Test: test_parse_string_utf8 ...passed 00:03:27.134 Test: test_parse_string_escapes_twochar ...passed 00:03:27.134 Test: test_parse_string_escapes_unicode ...passed 00:03:27.134 Test: test_parse_number ...passed 00:03:27.134 Test: test_parse_array ...passed 00:03:27.134 Test: test_parse_object ...passed 00:03:27.134 Test: test_parse_nesting ...passed 00:03:27.134 Test: 
test_parse_comment ...passed 00:03:27.134 00:03:27.134 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.134 suites 1 1 n/a 0 0 00:03:27.134 tests 11 11 11 0 0 00:03:27.134 asserts 1516 1516 1516 0 n/a 00:03:27.134 00:03:27.134 Elapsed time = 0.000 seconds 00:03:27.134 17:22:22 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:03:27.134 00:03:27.134 00:03:27.134 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.134 http://cunit.sourceforge.net/ 00:03:27.134 00:03:27.134 00:03:27.134 Suite: json 00:03:27.134 Test: test_strequal ...passed 00:03:27.134 Test: test_num_to_uint16 ...passed 00:03:27.134 Test: test_num_to_int32 ...passed 00:03:27.134 Test: test_num_to_uint64 ...passed 00:03:27.134 Test: test_decode_object ...passed 00:03:27.134 Test: test_decode_array ...passed 00:03:27.134 Test: test_decode_bool ...passed 00:03:27.134 Test: test_decode_uint16 ...passed 00:03:27.134 Test: test_decode_int32 ...passed 00:03:27.134 Test: test_decode_uint32 ...passed 00:03:27.134 Test: test_decode_uint64 ...passed 00:03:27.134 Test: test_decode_string ...passed 00:03:27.134 Test: test_decode_uuid ...passed 00:03:27.134 Test: test_find ...passed 00:03:27.134 Test: test_find_array ...passed 00:03:27.134 Test: test_iterating ...passed 00:03:27.134 Test: test_free_object ...passed 00:03:27.134 00:03:27.134 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.134 suites 1 1 n/a 0 0 00:03:27.134 tests 17 17 17 0 0 00:03:27.134 asserts 236 236 236 0 n/a 00:03:27.134 00:03:27.134 Elapsed time = 0.000 seconds 00:03:27.134 17:22:22 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:03:27.134 00:03:27.134 00:03:27.134 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.134 http://cunit.sourceforge.net/ 00:03:27.134 00:03:27.134 00:03:27.134 Suite: json 00:03:27.134 Test: test_write_literal ...passed 00:03:27.134 Test: test_write_string_simple ...passed 00:03:27.134 Test: test_write_string_escapes ...passed 00:03:27.134 Test: test_write_string_utf16le ...passed 00:03:27.134 Test: test_write_number_int32 ...passed 00:03:27.134 Test: test_write_number_uint32 ...passed 00:03:27.134 Test: test_write_number_uint128 ...passed 00:03:27.134 Test: test_write_string_number_uint128 ...passed 00:03:27.134 Test: test_write_number_int64 ...passed 00:03:27.134 Test: test_write_number_uint64 ...passed 00:03:27.134 Test: test_write_number_double ...passed 00:03:27.134 Test: test_write_uuid ...passed 00:03:27.134 Test: test_write_array ...passed 00:03:27.134 Test: test_write_object ...passed 00:03:27.134 Test: test_write_nesting ...passed 00:03:27.134 Test: test_write_val ...passed 00:03:27.134 00:03:27.134 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.134 suites 1 1 n/a 0 0 00:03:27.134 tests 16 16 16 0 0 00:03:27.134 asserts 918 918 918 0 n/a 00:03:27.134 00:03:27.134 Elapsed time = 0.000 seconds 00:03:27.134 17:22:22 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:03:27.134 00:03:27.134 00:03:27.134 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.134 http://cunit.sourceforge.net/ 00:03:27.134 00:03:27.134 00:03:27.134 Suite: jsonrpc 00:03:27.134 Test: test_parse_request ...passed 00:03:27.134 Test: test_parse_request_streaming ...passed 00:03:27.134 00:03:27.134 Run Summary: Type Total Ran Passed Failed Inactive 
00:03:27.134 suites 1 1 n/a 0 0 00:03:27.134 tests 2 2 2 0 0 00:03:27.134 asserts 289 289 289 0 n/a 00:03:27.134 00:03:27.134 Elapsed time = 0.000 seconds 00:03:27.134 00:03:27.134 real 0m0.028s 00:03:27.134 user 0m0.028s 00:03:27.134 sys 0m0.012s 00:03:27.134 17:22:22 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.134 17:22:22 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:03:27.134 ************************************ 00:03:27.134 END TEST unittest_json 00:03:27.134 ************************************ 00:03:27.134 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:27.134 17:22:22 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:03:27.134 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.134 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.135 ************************************ 00:03:27.135 START TEST unittest_rpc 00:03:27.135 ************************************ 00:03:27.135 17:22:22 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:03:27.135 17:22:22 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:03:27.135 00:03:27.135 00:03:27.135 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.135 http://cunit.sourceforge.net/ 00:03:27.135 00:03:27.135 00:03:27.135 Suite: rpc 00:03:27.135 Test: test_jsonrpc_handler ...passed 00:03:27.135 Test: test_spdk_rpc_is_method_allowed ...passed 00:03:27.135 Test: test_rpc_get_methods ...passed 00:03:27.135 Test: test_rpc_spdk_get_version ...passed 00:03:27.135 Test: test_spdk_rpc_listen_close ...passed 00:03:27.135 Test: test_rpc_run_multiple_servers ...passed 00:03:27.135 00:03:27.135 [2024-07-15 17:22:22.824240] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:03:27.135 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.135 suites 1 1 n/a 0 0 00:03:27.135 tests 6 6 6 0 0 00:03:27.135 asserts 23 23 23 0 n/a 00:03:27.135 00:03:27.135 Elapsed time = 0.000 seconds 00:03:27.135 00:03:27.135 real 0m0.006s 00:03:27.135 user 0m0.004s 00:03:27.135 sys 0m0.004s 00:03:27.135 17:22:22 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.135 17:22:22 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:27.135 ************************************ 00:03:27.135 END TEST unittest_rpc 00:03:27.135 ************************************ 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:27.135 17:22:22 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.135 ************************************ 00:03:27.135 START TEST unittest_notify 00:03:27.135 ************************************ 00:03:27.135 17:22:22 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:03:27.135 00:03:27.135 00:03:27.135 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.135 
http://cunit.sourceforge.net/ 00:03:27.135 00:03:27.135 00:03:27.135 Suite: app_suite 00:03:27.135 Test: notify ...passed 00:03:27.135 00:03:27.135 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.135 suites 1 1 n/a 0 0 00:03:27.135 tests 1 1 1 0 0 00:03:27.135 asserts 13 13 13 0 n/a 00:03:27.135 00:03:27.135 Elapsed time = 0.000 seconds 00:03:27.135 00:03:27.135 real 0m0.006s 00:03:27.135 user 0m0.000s 00:03:27.135 sys 0m0.008s 00:03:27.135 17:22:22 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:27.135 17:22:22 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:03:27.135 ************************************ 00:03:27.135 END TEST unittest_notify 00:03:27.135 ************************************ 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:27.135 17:22:22 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:27.135 17:22:22 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:27.135 ************************************ 00:03:27.135 START TEST unittest_nvme 00:03:27.135 ************************************ 00:03:27.135 17:22:22 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:03:27.135 17:22:22 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:03:27.135 00:03:27.135 00:03:27.135 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.135 http://cunit.sourceforge.net/ 00:03:27.135 00:03:27.135 00:03:27.135 Suite: nvme 00:03:27.135 Test: test_opc_data_transfer ...passed 00:03:27.135 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:03:27.135 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:03:27.135 Test: test_trid_parse_and_compare ...[2024-07-15 17:22:22.920610] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:03:27.135 [2024-07-15 17:22:22.920800] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:27.135 [2024-07-15 17:22:22.920817] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1212:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:03:27.135 passed 00:03:27.135 Test: test_trid_trtype_str ...passed 00:03:27.135 Test: test_trid_adrfam_str ...passed 00:03:27.135 Test: test_nvme_ctrlr_probe ...[2024-07-15 17:22:22.920828] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:27.135 [2024-07-15 17:22:22.920839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:03:27.135 [2024-07-15 17:22:22.920848] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:03:27.135 passed 00:03:27.135 Test: test_spdk_nvme_probe ...passed 00:03:27.135 Test: test_spdk_nvme_connect ...passed 00:03:27.135 Test: test_nvme_ctrlr_probe_internal ...passed 00:03:27.135 Test: test_nvme_init_controllers ...passed 00:03:27.135 Test: test_nvme_driver_init ...[2024-07-15 17:22:22.920951] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:27.135 [2024-07-15 17:22:22.920975] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:27.135 [2024-07-15 17:22:22.920989] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:27.135 [2024-07-15 17:22:22.921001] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:03:27.135 [2024-07-15 17:22:22.921011] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:03:27.135 [2024-07-15 17:22:22.921038] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:03:27.135 [2024-07-15 17:22:22.921120] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:27.135 [2024-07-15 17:22:22.921145] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:03:27.135 [2024-07-15 17:22:22.921156] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:03:27.135 [2024-07-15 17:22:22.921170] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:03:27.135 [2024-07-15 17:22:22.921192] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:03:27.135 [2024-07-15 17:22:22.921203] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:03:27.395 [2024-07-15 17:22:23.034166] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:03:27.395 passed 00:03:27.395 Test: test_spdk_nvme_detach ...passed 00:03:27.395 Test: test_nvme_completion_poll_cb ...passed 00:03:27.395 Test: test_nvme_user_copy_cmd_complete ...passed 00:03:27.395 Test: test_nvme_allocate_request_null ...passed 00:03:27.395 Test: test_nvme_allocate_request ...passed 00:03:27.395 Test: test_nvme_free_request ...passed 00:03:27.395 Test: test_nvme_allocate_request_user_copy ...passed 00:03:27.395 Test: test_nvme_robust_mutex_init_shared ...passed 00:03:27.395 Test: test_nvme_request_check_timeout ...passed 00:03:27.395 Test: test_nvme_wait_for_completion ...passed 00:03:27.395 Test: test_spdk_nvme_parse_func ...passed 00:03:27.395 Test: test_spdk_nvme_detach_async ...passed 00:03:27.395 Test: test_nvme_parse_addr ...[2024-07-15 17:22:23.035180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:03:27.395 passed 00:03:27.395 00:03:27.395 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.395 suites 1 1 n/a 0 0 00:03:27.395 tests 25 25 25 0 0 00:03:27.395 asserts 326 326 326 0 n/a 00:03:27.395 00:03:27.395 Elapsed time = 0.000 seconds 00:03:27.395 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:03:27.395 00:03:27.395 00:03:27.395 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.395 http://cunit.sourceforge.net/ 00:03:27.395 00:03:27.395 00:03:27.395 Suite: nvme_ctrlr 00:03:27.395 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-15 17:22:23.043554] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min 
value 00:03:27.395 passed 00:03:27.395 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-15 17:22:23.044968] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.395 passed 00:03:27.395 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-15 17:22:23.046125] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.395 passed 00:03:27.395 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-15 17:22:23.047270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.395 passed 00:03:27.395 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-15 17:22:23.048427] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 [2024-07-15 17:22:23.049551] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 17:22:23.050680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 17:22:23.051814] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:27.396 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-15 17:22:23.054071] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 [2024-07-15 17:22:23.056290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 17:22:23.057423] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:27.396 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-15 17:22:23.059661] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 [2024-07-15 17:22:23.060776] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-15 17:22:23.062990] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:03:27.396 Test: test_nvme_ctrlr_init_delay ...[2024-07-15 17:22:23.065228] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_alloc_io_qpair_rr_1 ...[2024-07-15 17:22:23.066370] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 [2024-07-15 17:22:23.066408] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:27.396 passed 00:03:27.396 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:03:27.396 Test: 
test_ctrlr_get_default_io_qpair_opts ...passed 00:03:27.396 Test: test_alloc_io_qpair_wrr_1 ...passed 00:03:27.396 Test: test_alloc_io_qpair_wrr_2 ...passed 00:03:27.396 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-15 17:22:23.066426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:27.396 [2024-07-15 17:22:23.066439] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:27.396 [2024-07-15 17:22:23.066452] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:03:27.396 [2024-07-15 17:22:23.066499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 [2024-07-15 17:22:23.066526] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 [2024-07-15 17:22:23.066544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:03:27.396 [2024-07-15 17:22:23.066576] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:03:27.396 [2024-07-15 17:22:23.066590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:27.396 [2024-07-15 17:22:23.066604] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_fail ...passed 00:03:27.396 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:03:27.396 Test: test_nvme_ctrlr_set_supported_features ...passed 00:03:27.396 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-15 17:22:23.066617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:03:27.396 [2024-07-15 17:22:23.066633] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
00:03:27.396 [2024-07-15 17:22:23.066659] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:03:27.396 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-15 17:22:23.067813] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:03:27.396 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:03:27.396 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:03:27.396 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-15 17:22:23.107961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-15 17:22:23.114561] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-15 17:22:23.115694] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 [2024-07-15 17:22:23.115722] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3003:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:03:27.396 passed 00:03:27.396 Test: test_alloc_io_qpair_fail ...[2024-07-15 17:22:23.116839] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 [2024-07-15 17:22:23.116861] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_add_remove_process ...passed 00:03:27.396 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:03:27.396 Test: test_nvme_ctrlr_set_state ...passed 00:03:27.396 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-15 17:22:23.116893] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1547:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
00:03:27.396 [2024-07-15 17:22:23.116910] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-15 17:22:23.119798] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-15 17:22:23.126365] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_reset ...[2024-07-15 17:22:23.127523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_aer_callback ...[2024-07-15 17:22:23.127584] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-15 17:22:23.128725] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.396 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:03:27.396 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:03:27.396 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-15 17:22:23.129950] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.396 passed 00:03:27.397 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:03:27.397 Test: test_nvme_ctrlr_ana_resize ...[2024-07-15 17:22:23.131094] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.397 passed 00:03:27.397 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:03:27.397 Test: test_nvme_transport_ctrlr_ready ...[2024-07-15 17:22:23.132255] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:03:27.397 passed 00:03:27.397 Test: test_nvme_ctrlr_disable ...[2024-07-15 17:22:23.132275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4205:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:03:27.397 [2024-07-15 17:22:23.132292] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:03:27.397 passed 00:03:27.397 00:03:27.397 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.397 suites 1 1 n/a 0 0 00:03:27.397 tests 44 44 44 0 0 00:03:27.397 asserts 10434 10434 10434 0 n/a 00:03:27.397 00:03:27.397 Elapsed time = 0.039 seconds 00:03:27.397 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:03:27.397 00:03:27.397 00:03:27.397 CUnit - A unit testing framework 
for C - Version 2.1-3 00:03:27.397 http://cunit.sourceforge.net/ 00:03:27.397 00:03:27.397 00:03:27.397 Suite: nvme_ctrlr_cmd 00:03:27.397 Test: test_get_log_pages ...passed 00:03:27.397 Test: test_set_feature_cmd ...passed 00:03:27.397 Test: test_set_feature_ns_cmd ...passed 00:03:27.397 Test: test_get_feature_cmd ...passed 00:03:27.397 Test: test_get_feature_ns_cmd ...passed 00:03:27.397 Test: test_abort_cmd ...passed 00:03:27.397 Test: test_set_host_id_cmds ...[2024-07-15 17:22:23.143687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:03:27.397 passed 00:03:27.397 Test: test_io_cmd_raw_no_payload_build ...passed 00:03:27.397 Test: test_io_raw_cmd ...passed 00:03:27.397 Test: test_io_raw_cmd_with_md ...passed 00:03:27.397 Test: test_namespace_attach ...passed 00:03:27.397 Test: test_namespace_detach ...passed 00:03:27.397 Test: test_namespace_create ...passed 00:03:27.397 Test: test_namespace_delete ...passed 00:03:27.397 Test: test_doorbell_buffer_config ...passed 00:03:27.397 Test: test_format_nvme ...passed 00:03:27.397 Test: test_fw_commit ...passed 00:03:27.397 Test: test_fw_image_download ...passed 00:03:27.397 Test: test_sanitize ...passed 00:03:27.397 Test: test_directive ...passed 00:03:27.397 Test: test_nvme_request_add_abort ...passed 00:03:27.397 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:03:27.397 Test: test_nvme_ctrlr_cmd_identify ...passed 00:03:27.397 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:03:27.397 00:03:27.397 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.397 suites 1 1 n/a 0 0 00:03:27.397 tests 24 24 24 0 0 00:03:27.397 asserts 198 198 198 0 n/a 00:03:27.397 00:03:27.397 Elapsed time = 0.000 seconds 00:03:27.397 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:03:27.397 00:03:27.397 00:03:27.397 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.397 http://cunit.sourceforge.net/ 00:03:27.397 00:03:27.397 00:03:27.397 Suite: nvme_ctrlr_cmd 00:03:27.397 Test: test_geometry_cmd ...passed 00:03:27.397 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:03:27.397 00:03:27.397 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.397 suites 1 1 n/a 0 0 00:03:27.397 tests 2 2 2 0 0 00:03:27.397 asserts 7 7 7 0 n/a 00:03:27.397 00:03:27.397 Elapsed time = 0.000 seconds 00:03:27.397 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:03:27.397 00:03:27.397 00:03:27.397 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.397 http://cunit.sourceforge.net/ 00:03:27.397 00:03:27.397 00:03:27.397 Suite: nvme 00:03:27.397 Test: test_nvme_ns_construct ...passed 00:03:27.397 Test: test_nvme_ns_uuid ...passed 00:03:27.397 Test: test_nvme_ns_csi ...passed 00:03:27.397 Test: test_nvme_ns_data ...passed 00:03:27.397 Test: test_nvme_ns_set_identify_data ...passed 00:03:27.397 Test: test_spdk_nvme_ns_get_values ...passed 00:03:27.397 Test: test_spdk_nvme_ns_is_active ...passed 00:03:27.397 Test: spdk_nvme_ns_supports ...passed 00:03:27.397 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:03:27.397 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:03:27.397 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:03:27.397 Test: test_nvme_ns_find_id_desc ...passed 00:03:27.397 00:03:27.397 Run Summary: Type Total Ran 
Passed Failed Inactive 00:03:27.397 suites 1 1 n/a 0 0 00:03:27.397 tests 12 12 12 0 0 00:03:27.397 asserts 95 95 95 0 n/a 00:03:27.397 00:03:27.397 Elapsed time = 0.000 seconds 00:03:27.397 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:03:27.397 00:03:27.397 00:03:27.397 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.397 http://cunit.sourceforge.net/ 00:03:27.397 00:03:27.397 00:03:27.397 Suite: nvme_ns_cmd 00:03:27.397 Test: split_test ...passed 00:03:27.397 Test: split_test2 ...passed 00:03:27.397 Test: split_test3 ...passed 00:03:27.397 Test: split_test4 ...passed 00:03:27.397 Test: test_nvme_ns_cmd_flush ...passed 00:03:27.397 Test: test_nvme_ns_cmd_dataset_management ...passed 00:03:27.397 Test: test_nvme_ns_cmd_copy ...passed 00:03:27.397 Test: test_io_flags ...passed 00:03:27.397 Test: test_nvme_ns_cmd_write_zeroes ...passed 00:03:27.397 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:03:27.397 Test: test_nvme_ns_cmd_reservation_register ...passed 00:03:27.397 Test: test_nvme_ns_cmd_reservation_release ...passed 00:03:27.397 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:03:27.397 Test: test_nvme_ns_cmd_reservation_report ...passed 00:03:27.397 Test: test_cmd_child_request ...passed 00:03:27.398 Test: test_nvme_ns_cmd_readv ...passed 00:03:27.398 Test: test_nvme_ns_cmd_read_with_md ...passed 00:03:27.398 Test: test_nvme_ns_cmd_writev ...[2024-07-15 17:22:23.161033] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:03:27.398 [2024-07-15 17:22:23.161286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:03:27.398 passed 00:03:27.398 Test: test_nvme_ns_cmd_write_with_md ...passed 00:03:27.398 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:03:27.398 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:03:27.398 Test: test_nvme_ns_cmd_comparev ...passed 00:03:27.398 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:03:27.398 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:03:27.398 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:03:27.398 Test: test_nvme_ns_cmd_setup_request ...passed 00:03:27.398 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:03:27.398 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:03:27.398 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:03:27.398 Test: test_nvme_ns_cmd_verify ...passed 00:03:27.398 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:03:27.398 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:03:27.398 00:03:27.398 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.398 suites 1 1 n/a 0 0 00:03:27.398 tests 32 32 32 0 0 00:03:27.398 asserts 550 550 550 0 n/a 00:03:27.398 00:03:27.398 Elapsed time = 0.000 seconds 00:03:27.398 [2024-07-15 17:22:23.161401] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:27.398 [2024-07-15 17:22:23.161419] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:03:27.398 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:03:27.398 00:03:27.398 00:03:27.398 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.398 http://cunit.sourceforge.net/ 
00:03:27.398 00:03:27.398 00:03:27.398 Suite: nvme_ns_cmd 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:03:27.398 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:03:27.398 00:03:27.398 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.398 suites 1 1 n/a 0 0 00:03:27.398 tests 12 12 12 0 0 00:03:27.398 asserts 123 123 123 0 n/a 00:03:27.398 00:03:27.398 Elapsed time = 0.000 seconds 00:03:27.398 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:03:27.398 00:03:27.398 00:03:27.398 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.398 http://cunit.sourceforge.net/ 00:03:27.398 00:03:27.398 00:03:27.398 Suite: nvme_qpair 00:03:27.398 Test: test3 ...passed 00:03:27.398 Test: test_ctrlr_failed ...passed 00:03:27.398 Test: struct_packing ...passed 00:03:27.398 Test: test_nvme_qpair_process_completions ...[2024-07-15 17:22:23.173049] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:27.398 [2024-07-15 17:22:23.173290] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:27.398 [2024-07-15 17:22:23.173367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:03:27.398 passed 00:03:27.398 Test: test_nvme_completion_is_retry ...[2024-07-15 17:22:23.173393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:03:27.398 passed 00:03:27.398 Test: test_get_status_string ...passed 00:03:27.398 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:03:27.398 Test: test_nvme_qpair_submit_request ...passed 00:03:27.398 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:03:27.398 Test: test_nvme_qpair_manual_complete_request ...passed 00:03:27.398 Test: test_nvme_qpair_init_deinit ...passed 00:03:27.398 Test: test_nvme_get_sgl_print_info ...passed 00:03:27.398 00:03:27.398 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.398 suites 1 1 n/a 0 0 00:03:27.398 tests 12 12 12 0 0 00:03:27.398 asserts 154 154 154 0 n/a 00:03:27.398 00:03:27.398 Elapsed time = 0.000 seconds 00:03:27.398 [2024-07-15 17:22:23.173471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:03:27.398 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:03:27.398 00:03:27.398 00:03:27.398 CUnit - A unit testing framework for 
C - Version 2.1-3 00:03:27.398 http://cunit.sourceforge.net/ 00:03:27.398 00:03:27.398 00:03:27.398 Suite: nvme_pcie 00:03:27.398 Test: test_prp_list_append ...passed 00:03:27.398 Test: test_nvme_pcie_hotplug_monitor ...passed 00:03:27.398 Test: test_shadow_doorbell_update ...passed 00:03:27.398 Test: test_build_contig_hw_sgl_request ...passed 00:03:27.398 Test: test_nvme_pcie_qpair_build_metadata ...[2024-07-15 17:22:23.178747] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:27.398 [2024-07-15 17:22:23.178970] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:03:27.398 [2024-07-15 17:22:23.178988] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:03:27.398 [2024-07-15 17:22:23.179040] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:27.398 [2024-07-15 17:22:23.179065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:03:27.398 passed 00:03:27.398 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:03:27.398 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:03:27.398 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:03:27.398 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:03:27.398 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:03:27.398 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:03:27.398 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:03:27.398 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:03:27.398 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:03:27.398 00:03:27.398 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.398 suites 1 1 n/a 0 0 00:03:27.398 tests 14 14 14 0 0 00:03:27.398 asserts 235 235 235 0 n/a 00:03:27.398 00:03:27.398 Elapsed time = 0.000 seconds 00:03:27.398 [2024-07-15 17:22:23.179172] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:03:27.398 [2024-07-15 17:22:23.179212] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
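The nvme_pcie_prp_list_append failures above boil down to simple address-alignment checks: the buffer's virtual address must be dword aligned, and PRP entries after the first must be page aligned. A minimal C sketch of those two checks, assuming a 4 KiB page size and hypothetical helper names (this is not the SPDK source):

    #include <stdbool.h>
    #include <stdint.h>

    #define PRP_PAGE_SIZE 4096u   /* assumed page size for this sketch */

    /* 0x100001 from the log fails this: the low two bits are not clear. */
    static bool is_dword_aligned(uintptr_t virt_addr)
    {
        return (virt_addr & 0x3u) == 0;
    }

    /* 0x900800 from the log fails this for 4 KiB pages. */
    static bool is_page_aligned(uint64_t addr)
    {
        return (addr & (PRP_PAGE_SIZE - 1)) == 0;
    }
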
00:03:27.398 [2024-07-15 17:22:23.179232] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:03:27.398 [2024-07-15 17:22:23.179250] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:03:27.398 [2024-07-15 17:22:23.179265] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:03:27.398 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:03:27.398 00:03:27.398 00:03:27.398 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.398 http://cunit.sourceforge.net/ 00:03:27.398 00:03:27.398 00:03:27.398 Suite: nvme_ns_cmd 00:03:27.398 Test: nvme_poll_group_create_test ...passed 00:03:27.398 Test: nvme_poll_group_add_remove_test ...passed 00:03:27.398 Test: nvme_poll_group_process_completions ...passed 00:03:27.398 Test: nvme_poll_group_destroy_test ...passed 00:03:27.398 Test: nvme_poll_group_get_free_stats ...passed 00:03:27.398 00:03:27.398 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.398 suites 1 1 n/a 0 0 00:03:27.399 tests 5 5 5 0 0 00:03:27.399 asserts 75 75 75 0 n/a 00:03:27.399 00:03:27.399 Elapsed time = 0.000 seconds 00:03:27.399 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:03:27.399 00:03:27.399 00:03:27.399 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.399 http://cunit.sourceforge.net/ 00:03:27.399 00:03:27.399 00:03:27.399 Suite: nvme_quirks 00:03:27.399 Test: test_nvme_quirks_striping ...passed 00:03:27.399 00:03:27.399 Run Summary: Type Total Ran Passed Failed Inactive 00:03:27.399 suites 1 1 n/a 0 0 00:03:27.399 tests 1 1 1 0 0 00:03:27.399 asserts 5 5 5 0 n/a 00:03:27.399 00:03:27.399 Elapsed time = 0.000 seconds 00:03:27.399 17:22:23 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:03:27.399 00:03:27.399 00:03:27.399 CUnit - A unit testing framework for C - Version 2.1-3 00:03:27.399 http://cunit.sourceforge.net/ 00:03:27.399 00:03:27.399 00:03:27.399 Suite: nvme_tcp 00:03:27.399 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:03:27.399 Test: test_nvme_tcp_build_iovs ...passed 00:03:27.399 Test: test_nvme_tcp_build_sgl_request ...passed 00:03:27.399 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:03:27.399 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:03:27.399 Test: test_nvme_tcp_req_complete_safe ...passed 00:03:27.399 Test: test_nvme_tcp_req_get ...passed 00:03:27.399 Test: test_nvme_tcp_req_init ...passed 00:03:27.399 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:03:27.399 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:03:27.399 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:03:27.399 Test: test_nvme_tcp_alloc_reqs ...passed 00:03:27.399 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:03:27.399 Test: test_nvme_tcp_pdu_ch_handle ...[2024-07-15 17:22:23.196728] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x820c7e1a8, and the iovcnt=16, remaining_size=28672 00:03:27.399 [2024-07-15 17:22:23.196961] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is 
same with the state(6) to be set 00:03:27.399 [2024-07-15 17:22:23.197000] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 [2024-07-15 17:22:23.197012] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x820c7f4e8 00:03:27.399 [2024-07-15 17:22:23.197022] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1250:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:03:27.399 [2024-07-15 17:22:23.197031] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 passed 00:03:27.399 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-15 17:22:23.197039] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:03:27.399 [2024-07-15 17:22:23.197053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 [2024-07-15 17:22:23.197062] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:03:27.399 [2024-07-15 17:22:23.197070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 [2024-07-15 17:22:23.197080] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 [2024-07-15 17:22:23.197088] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 [2024-07-15 17:22:23.197213] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 [2024-07-15 17:22:23.197222] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 [2024-07-15 17:22:23.197231] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:27.399 [2024-07-15 17:22:23.197264] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:03:27.399 [2024-07-15 17:22:23.197274] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:45.495 passed 00:03:45.495 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:03:45.495 Test: test_nvme_tcp_c2h_payload_handle ...passed 00:03:45.495 Test: test_nvme_tcp_icresp_handle ...[2024-07-15 17:22:38.535288] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:03:45.495 [2024-07-15 17:22:38.535407] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820c7f920): PDU Sequence Error 00:03:45.495 [2024-07-15 
17:22:38.535435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:03:45.495 [2024-07-15 17:22:38.535453] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1584:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:03:45.495 [2024-07-15 17:22:38.535469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:45.495 [2024-07-15 17:22:38.535485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:03:45.495 passed 00:03:45.495 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:03:45.495 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:03:45.495 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:03:45.495 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...passed 00:03:45.495 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-15 17:22:38.535500] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(5) to be set 00:03:45.495 [2024-07-15 17:22:38.535516] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7fd58 is same with the state(0) to be set 00:03:45.495 [2024-07-15 17:22:38.535538] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820c7f920): PDU Sequence Error 00:03:45.495 [2024-07-15 17:22:38.535571] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x820c7fd58 00:03:45.495 [2024-07-15 17:22:38.535629] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x820c7dab8, errno=0, rc=0 00:03:45.495 [2024-07-15 17:22:38.535647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7dab8 is same with the state(5) to be set 00:03:45.495 [2024-07-15 17:22:38.535662] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7dab8 is same with the state(5) to be set 00:03:45.495 [2024-07-15 17:22:38.535745] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820c7dab8 (0): No error: 0 00:03:45.495 [2024-07-15 17:22:38.535762] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820c7dab8 (0): No error: 0 00:03:45.495 passed 00:03:45.495 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:03:45.495 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:03:45.495 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-15 17:22:38.619430] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:45.495 [2024-07-15 17:22:38.619496] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
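Several of the nvme_tcp failures above are plain range checks on fields of the ICResp PDU received during connect: PFV must be 0, maxh2cdata must be at least 4096, and cpda must not exceed 31. A compact illustration of those bounds (struct layout and names are assumptions for this sketch, not the SPDK definitions):

    #include <stdbool.h>
    #include <stdint.h>

    struct icresp_fields {
        uint16_t pfv;        /* PDU format version; the log expects 0     */
        uint32_t maxh2cdata; /* the log expects >= 4096 (got 2048 above)  */
        uint8_t  cpda;       /* data alignment; the log expects <= 31     */
    };

    static bool icresp_fields_ok(const struct icresp_fields *f)
    {
        return f->pfv == 0 && f->maxh2cdata >= 4096 && f->cpda <= 31;
    }
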
00:03:45.495 [2024-07-15 17:22:38.619536] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:45.495 [2024-07-15 17:22:38.619546] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:45.495 [2024-07-15 17:22:38.619590] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:45.495 [2024-07-15 17:22:38.619600] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:45.495 [2024-07-15 17:22:38.619613] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:03:45.495 [2024-07-15 17:22:38.619622] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:45.495 passed 00:03:45.495 Test: test_nvme_tcp_qpair_submit_request ...passed 00:03:45.495 00:03:45.495 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.495 suites 1 1 n/a 0 0 00:03:45.495 tests 27 27 27 0 0 00:03:45.495 asserts 624 624 624 0 n/a 00:03:45.495 00:03:45.495 Elapsed time = 0.078 seconds 00:03:45.495 [2024-07-15 17:22:38.619645] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2384:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee06966b000 with addr=192.168.1.78, port=23 00:03:45.496 [2024-07-15 17:22:38.619655] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:03:45.496 [2024-07-15 17:22:38.619676] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x1ee069639180, and the iovcnt=1, remaining_size=1024 00:03:45.496 [2024-07-15 17:22:38.619685] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:03:45.496 17:22:38 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:03:45.496 00:03:45.496 00:03:45.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.496 http://cunit.sourceforge.net/ 00:03:45.496 00:03:45.496 00:03:45.496 Suite: nvme_transport 00:03:45.496 Test: test_nvme_get_transport ...passed 00:03:45.496 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:03:45.496 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:03:45.496 Test: test_nvme_transport_poll_group_add_remove ...passed 00:03:45.496 Test: test_ctrlr_get_memory_domains ...passed 00:03:45.496 00:03:45.496 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.496 suites 1 1 n/a 0 0 00:03:45.496 tests 5 5 5 0 0 00:03:45.496 asserts 28 28 28 0 n/a 00:03:45.496 00:03:45.496 Elapsed time = 0.000 seconds 00:03:45.496 17:22:38 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:03:45.496 00:03:45.496 00:03:45.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.496 http://cunit.sourceforge.net/ 00:03:45.496 00:03:45.496 00:03:45.496 Suite: nvme_io_msg 00:03:45.496 Test: test_nvme_io_msg_send ...passed 00:03:45.496 Test: test_nvme_io_msg_process ...passed 00:03:45.496 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:03:45.496 00:03:45.496 Run Summary: 
Type Total Ran Passed Failed Inactive 00:03:45.496 suites 1 1 n/a 0 0 00:03:45.496 tests 3 3 3 0 0 00:03:45.496 asserts 56 56 56 0 n/a 00:03:45.496 00:03:45.496 Elapsed time = 0.000 seconds 00:03:45.496 17:22:38 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:03:45.496 00:03:45.496 00:03:45.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.496 http://cunit.sourceforge.net/ 00:03:45.496 00:03:45.496 00:03:45.496 Suite: nvme_pcie_common 00:03:45.496 Test: test_nvme_pcie_ctrlr_alloc_cmb ...passed 00:03:45.496 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:03:45.496 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:03:45.496 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:03:45.496 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-15 17:22:38.639335] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:03:45.496 [2024-07-15 17:22:38.639552] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:03:45.496 [2024-07-15 17:22:38.639570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:03:45.496 [2024-07-15 17:22:38.639580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:03:45.496 passed 00:03:45.496 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:03:45.496 00:03:45.496 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.496 suites 1 1 n/a 0 0 00:03:45.496 tests 6 6 6 0 0 00:03:45.496 asserts 148 148 148 0 n/a 00:03:45.496 00:03:45.496 Elapsed time = 0.000 seconds[2024-07-15 17:22:38.639677] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:45.496 [2024-07-15 17:22:38.639690] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:45.496 00:03:45.496 17:22:38 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:03:45.496 00:03:45.496 00:03:45.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.496 http://cunit.sourceforge.net/ 00:03:45.496 00:03:45.496 00:03:45.496 Suite: nvme_fabric 00:03:45.496 Test: test_nvme_fabric_prop_set_cmd ...passed 00:03:45.496 Test: test_nvme_fabric_prop_get_cmd ...passed 00:03:45.496 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:03:45.496 Test: test_nvme_fabric_discover_probe ...passed 00:03:45.496 Test: test_nvme_fabric_qpair_connect ...[2024-07-15 17:22:38.644795] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:03:45.496 passed 00:03:45.496 00:03:45.496 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.496 suites 1 1 n/a 0 0 00:03:45.496 tests 5 5 5 0 0 00:03:45.496 asserts 60 60 60 0 n/a 00:03:45.496 00:03:45.496 Elapsed time = 0.000 seconds 00:03:45.496 17:22:38 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:03:45.496 
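Each *_ut binary this script invokes, such as nvme_opal_ut above, is a standalone CUnit 2.1-3 program, and the Suite/Test lines and Run Summary tables in this log are CUnit's verbose reporting. A minimal sketch of the standard registration-and-run pattern, with placeholder suite and test names rather than the SPDK sources:

    #include <CUnit/Basic.h>

    static void test_example(void)
    {
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int main(void)
    {
        CU_pSuite suite;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }
        suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }
        CU_basic_set_mode(CU_BRM_VERBOSE);   /* prints per-test lines and the summary table */
        CU_basic_run_tests();
        CU_cleanup_registry();
        return CU_get_error();
    }
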
00:03:45.496 00:03:45.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.496 http://cunit.sourceforge.net/ 00:03:45.496 00:03:45.496 00:03:45.496 Suite: nvme_opal 00:03:45.496 Test: test_opal_nvme_security_recv_send_done ...passed 00:03:45.496 Test: test_opal_add_short_atom_header ...passed 00:03:45.496 00:03:45.496 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.496 suites 1 1 n/a 0 0 00:03:45.496 tests 2 2 2 0 0 00:03:45.496 asserts 22 22 22 0 n/a 00:03:45.496 00:03:45.496 Elapsed time = 0.000 seconds[2024-07-15 17:22:38.650160] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:03:45.496 00:03:45.496 00:03:45.496 real 0m15.735s 00:03:45.496 user 0m0.100s 00:03:45.496 sys 0m0.146s 00:03:45.496 17:22:38 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.496 17:22:38 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:03:45.496 ************************************ 00:03:45.496 END TEST unittest_nvme 00:03:45.496 ************************************ 00:03:45.496 17:22:38 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.496 17:22:38 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:45.496 17:22:38 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.496 17:22:38 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.496 17:22:38 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.496 ************************************ 00:03:45.496 START TEST unittest_log 00:03:45.496 ************************************ 00:03:45.496 17:22:38 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:03:45.496 00:03:45.496 00:03:45.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.496 http://cunit.sourceforge.net/ 00:03:45.496 00:03:45.496 00:03:45.496 Suite: log 00:03:45.496 Test: log_test ...[2024-07-15 17:22:38.697246] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:03:45.496 [2024-07-15 17:22:38.697500] log_ut.c: 57:log_test: *DEBUG*: log test 00:03:45.496 log dump test: 00:03:45.496 passed 00:03:45.496 Test: deprecation ...00000000 6c 6f 67 20 64 75 6d 70 log dump 00:03:45.496 spdk dump test: 00:03:45.496 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:03:45.496 spdk dump test: 00:03:45.496 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:03:45.496 00000010 65 20 63 68 61 72 73 e chars 00:03:45.496 passed 00:03:45.496 00:03:45.496 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.496 suites 1 1 n/a 0 0 00:03:45.496 tests 2 2 2 0 0 00:03:45.496 asserts 73 73 73 0 n/a 00:03:45.496 00:03:45.496 Elapsed time = 0.000 seconds 00:03:45.496 00:03:45.496 real 0m1.012s 00:03:45.496 user 0m0.006s 00:03:45.496 sys 0m0.005s 00:03:45.496 17:22:39 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.496 17:22:39 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:03:45.496 ************************************ 00:03:45.496 END TEST unittest_log 00:03:45.496 ************************************ 00:03:45.496 17:22:39 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.496 17:22:39 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:45.496 17:22:39 
unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.496 17:22:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.496 17:22:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.496 ************************************ 00:03:45.496 START TEST unittest_lvol 00:03:45.496 ************************************ 00:03:45.496 17:22:39 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:03:45.496 00:03:45.496 00:03:45.496 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.496 http://cunit.sourceforge.net/ 00:03:45.496 00:03:45.496 00:03:45.496 Suite: lvol 00:03:45.496 Test: lvs_init_unload_success ...[2024-07-15 17:22:39.755421] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:03:45.496 passed 00:03:45.496 Test: lvs_init_destroy_success ...passed 00:03:45.496 Test: lvs_init_opts_success ...passed 00:03:45.496 Test: lvs_unload_lvs_is_null_fail ...passed 00:03:45.496 Test: lvs_names ...[2024-07-15 17:22:39.755697] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:03:45.496 [2024-07-15 17:22:39.755746] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:03:45.496 [2024-07-15 17:22:39.755769] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:03:45.497 [2024-07-15 17:22:39.755789] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 00:03:45.497 [2024-07-15 17:22:39.755818] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:03:45.497 passed 00:03:45.497 Test: lvol_create_destroy_success ...passed 00:03:45.497 Test: lvol_create_fail ...passed 00:03:45.497 Test: lvol_destroy_fail ...passed 00:03:45.497 Test: lvol_close ...passed 00:03:45.497 Test: lvol_resize ...passed 00:03:45.497 Test: lvol_set_read_only ...[2024-07-15 17:22:39.755892] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:03:45.497 [2024-07-15 17:22:39.755917] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:03:45.497 [2024-07-15 17:22:39.755960] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:03:45.497 [2024-07-15 17:22:39.755994] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:03:45.497 [2024-07-15 17:22:39.756010] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:03:45.497 passed 00:03:45.497 Test: test_lvs_load ...passed 00:03:45.497 Test: lvols_load ...passed 00:03:45.497 Test: lvol_open ...passed 00:03:45.497 Test: lvol_snapshot ...passed 00:03:45.497 Test: lvol_snapshot_fail ...passed 00:03:45.497 Test: lvol_clone ...[2024-07-15 17:22:39.756086] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:03:45.497 [2024-07-15 17:22:39.756105] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:03:45.497 [2024-07-15 17:22:39.756138] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:45.497 [2024-07-15 
17:22:39.756181] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:03:45.497 [2024-07-15 17:22:39.756306] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:03:45.497 passed 00:03:45.497 Test: lvol_clone_fail ...passed 00:03:45.497 Test: lvol_iter_clones ...passed 00:03:45.497 Test: lvol_refcnt ...passed 00:03:45.497 Test: lvol_names ...passed 00:03:45.497 Test: lvol_create_thin_provisioned ...passed 00:03:45.497 Test: lvol_rename ...passed 00:03:45.497 Test: lvs_rename ...passed 00:03:45.497 Test: lvol_inflate ...passed 00:03:45.497 Test: lvol_decouple_parent ...passed 00:03:45.497 Test: lvol_get_xattr ...passed 00:03:45.497 Test: lvol_esnap_reload ...[2024-07-15 17:22:39.756353] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:03:45.497 [2024-07-15 17:22:39.756386] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol d648c3bf-42ce-11ef-96ac-773515fba644 because it is still open 00:03:45.497 [2024-07-15 17:22:39.756410] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:03:45.497 [2024-07-15 17:22:39.756422] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:45.497 [2024-07-15 17:22:39.756438] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:03:45.497 [2024-07-15 17:22:39.756470] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:45.497 [2024-07-15 17:22:39.756483] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:03:45.497 [2024-07-15 17:22:39.756505] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:03:45.497 [2024-07-15 17:22:39.756522] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:45.497 [2024-07-15 17:22:39.756539] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:03:45.497 passed 00:03:45.497 Test: lvol_esnap_create_bad_args ...passed 00:03:45.497 Test: lvol_esnap_create_delete ...passed 00:03:45.497 Test: lvol_esnap_load_esnaps ...passed 00:03:45.497 Test: lvol_esnap_missing ...passed 00:03:45.497 Test: lvol_esnap_hotplug ... 00:03:45.497 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:03:45.497 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:03:45.497 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:03:45.497 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:03:45.497 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:03:45.497 [2024-07-15 17:22:39.756573] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:03:45.497 [2024-07-15 17:22:39.756583] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
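The lvol failures above include several name-validation cases: an empty name, a name with no null terminator within the allowed length, and duplicate names. The first two reduce to checks like the following sketch; the length limit and function name are assumptions for illustration, not the SPDK lvol code:

    #include <stdbool.h>
    #include <string.h>

    #define LVOL_NAME_MAX 64   /* assumed limit for this sketch */

    static bool lvol_name_ok(const char *name)
    {
        if (name == NULL || name[0] == '\0') {
            return false;   /* "No name specified." */
        }
        if (strnlen(name, LVOL_NAME_MAX) == LVOL_NAME_MAX) {
            return false;   /* no terminator found within the limit */
        }
        return true;
    }
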
00:03:45.497 [2024-07-15 17:22:39.756592] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:03:45.497 [2024-07-15 17:22:39.756604] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:03:45.497 [2024-07-15 17:22:39.756625] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:03:45.497 [2024-07-15 17:22:39.756652] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:03:45.497 [2024-07-15 17:22:39.756671] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:45.497 [2024-07-15 17:22:39.756680] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:03:45.497 [2024-07-15 17:22:39.756756] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol d648d222-42ce-11ef-96ac-773515fba644: failed to create esnap bs_dev: error -12 00:03:45.497 [2024-07-15 17:22:39.756790] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol d648d360-42ce-11ef-96ac-773515fba644: failed to create esnap bs_dev: error -12 00:03:45.497 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:03:45.497 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:03:45.497 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:03:45.497 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:03:45.497 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:03:45.497 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:03:45.497 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:03:45.497 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:03:45.497 passed 00:03:45.497 Test: lvol_get_by ...passed 00:03:45.497 Test: lvol_shallow_copy ...passed 00:03:45.497 Test: lvol_set_parent ...passed 00:03:45.497 Test: lvol_set_external_parent ...passed 00:03:45.497 00:03:45.497 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.497 suites 1 1 n/a 0 0 00:03:45.497 tests 37 37 37 0 0 00:03:45.497 asserts 1505 1505 1505 0 n/a 00:03:45.497 00:03:45.497 Elapsed time = 0.000 seconds 00:03:45.497 [2024-07-15 17:22:39.756811] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol d648d451-42ce-11ef-96ac-773515fba644: failed to create esnap bs_dev: error -12 00:03:45.497 [2024-07-15 17:22:39.756950] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:45.497 [2024-07-15 17:22:39.756959] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol d648d9c4-42ce-11ef-96ac-773515fba644 shallow copy, ext_dev must not be NULL 00:03:45.497 [2024-07-15 17:22:39.756982] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:03:45.497 [2024-07-15 17:22:39.756990] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must 
not be NULL 00:03:45.497 [2024-07-15 17:22:39.757007] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:03:45.497 [2024-07-15 17:22:39.757016] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 00:03:45.497 [2024-07-15 17:22:39.757025] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:03:45.497 00:03:45.497 real 0m0.009s 00:03:45.497 user 0m0.006s 00:03:45.497 sys 0m0.000s 00:03:45.497 17:22:39 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.497 17:22:39 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:03:45.497 ************************************ 00:03:45.497 END TEST unittest_lvol 00:03:45.497 ************************************ 00:03:45.497 17:22:39 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.497 17:22:39 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:45.497 17:22:39 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:45.497 17:22:39 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.497 17:22:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.497 17:22:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.497 ************************************ 00:03:45.497 START TEST unittest_nvme_rdma 00:03:45.497 ************************************ 00:03:45.497 17:22:39 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:03:45.497 00:03:45.497 00:03:45.497 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.497 http://cunit.sourceforge.net/ 00:03:45.497 00:03:45.497 00:03:45.497 Suite: nvme_rdma 00:03:45.497 Test: test_nvme_rdma_build_sgl_request ...[2024-07-15 17:22:39.807717] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:03:45.497 passed 00:03:45.497 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:03:45.497 Test: test_nvme_rdma_build_contig_request ...[2024-07-15 17:22:39.807999] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1553:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:45.497 [2024-07-15 17:22:39.808025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1609:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:03:45.497 [2024-07-15 17:22:39.808056] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1490:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:03:45.497 passed 00:03:45.497 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:03:45.497 Test: test_nvme_rdma_create_reqs ...passed 00:03:45.497 Test: test_nvme_rdma_create_rsps ...passed 00:03:45.497 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:03:45.497 Test: test_nvme_rdma_poller_create ...[2024-07-15 17:22:39.808089] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:03:45.497 [2024-07-15 17:22:39.808142] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 
849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:03:45.497 [2024-07-15 17:22:39.808180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:03:45.497 [2024-07-15 17:22:39.808200] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:03:45.497 passed 00:03:45.497 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:03:45.498 Test: test_nvme_rdma_ctrlr_construct ...passed 00:03:45.498 Test: test_nvme_rdma_req_put_and_get ...passed 00:03:45.498 Test: test_nvme_rdma_req_init ...passed 00:03:45.498 Test: test_nvme_rdma_validate_cm_event ...[2024-07-15 17:22:39.808275] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:03:45.498 [2024-07-15 17:22:39.808367] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:03:45.498 passed 00:03:45.498 Test: test_nvme_rdma_qpair_init ...passed 00:03:45.498 Test: test_nvme_rdma_qpair_submit_request ...passed 00:03:45.498 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:03:45.498 Test: test_rdma_get_memory_translation ...passed 00:03:45.498 Test: test_get_rdma_qpair_from_wc ...passed 00:03:45.498 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:03:45.498 Test: test_nvme_rdma_poll_group_get_stats ...passed 00:03:45.498 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-15 17:22:39.808385] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:03:45.498 [2024-07-15 17:22:39.808421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:03:45.498 [2024-07-15 17:22:39.808436] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:03:45.498 [2024-07-15 17:22:39.808474] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:45.498 [2024-07-15 17:22:39.808489] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:03:45.498 [2024-07-15 17:22:39.808523] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:03:45.498 [2024-07-15 17:22:39.808539] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:03:45.498 [2024-07-15 17:22:39.808555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820e9e658 on poll group 0x363553c72000 00:03:45.498 [2024-07-15 17:22:39.808570] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
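The nvme_rdma suite above rejects an SGL of 16777216 bytes against a maximum of 16777215: a keyed SGL data block descriptor carries its length in a 24-bit field, so the largest representable size is 2^24 - 1. As a one-line check (names are placeholders for this sketch):

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_KEYED_SGL_LEN ((1u << 24) - 1)   /* 16777215 */

    static bool keyed_sgl_length_ok(uint64_t length)
    {
        return length <= MAX_KEYED_SGL_LEN;      /* 16777216 from the log fails */
    }
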
00:03:45.498 [2024-07-15 17:22:39.808585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:03:45.498 [2024-07-15 17:22:39.808599] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x820e9e658 on poll group 0x363553c72000 00:03:45.498 passed 00:03:45.498 00:03:45.498 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.498 suites 1 1 n/a 0 0 00:03:45.498 tests 21 21 21 0 0 00:03:45.498 asserts 397 397 397 0 n/a 00:03:45.498 00:03:45.498 Elapsed time = 0.000 seconds 00:03:45.498 [2024-07-15 17:22:39.808687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:45.498 00:03:45.498 real 0m0.008s 00:03:45.498 user 0m0.000s 00:03:45.498 sys 0m0.008s 00:03:45.498 17:22:39 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.498 17:22:39 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:45.498 ************************************ 00:03:45.498 END TEST unittest_nvme_rdma 00:03:45.498 ************************************ 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.498 17:22:39 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.498 ************************************ 00:03:45.498 START TEST unittest_nvmf_transport 00:03:45.498 ************************************ 00:03:45.498 17:22:39 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:03:45.498 00:03:45.498 00:03:45.498 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.498 http://cunit.sourceforge.net/ 00:03:45.498 00:03:45.498 00:03:45.498 Suite: nvmf 00:03:45.498 Test: test_spdk_nvmf_transport_create ...[2024-07-15 17:22:39.856549] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
00:03:45.498 [2024-07-15 17:22:39.856770] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:03:45.498 [2024-07-15 17:22:39.856792] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:03:45.498 passed 00:03:45.498 Test: test_nvmf_transport_poll_group_create ...passed 00:03:45.498 Test: test_spdk_nvmf_transport_opts_init ...[2024-07-15 17:22:39.856835] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:03:45.498 passed 00:03:45.498 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:03:45.498 00:03:45.498 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.498 suites 1 1 n/a 0 0 00:03:45.498 tests 4 4 4 0 0 00:03:45.498 asserts 49 49 49 0 n/a 00:03:45.498 00:03:45.498 Elapsed time = 0.000 seconds 00:03:45.498 [2024-07-15 17:22:39.856874] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:03:45.498 [2024-07-15 17:22:39.856888] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:03:45.498 [2024-07-15 17:22:39.856900] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:03:45.498 00:03:45.498 real 0m0.006s 00:03:45.498 user 0m0.000s 00:03:45.498 sys 0m0.008s 00:03:45.498 17:22:39 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.498 17:22:39 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:03:45.498 ************************************ 00:03:45.498 END TEST unittest_nvmf_transport 00:03:45.498 ************************************ 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.498 17:22:39 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.498 ************************************ 00:03:45.498 START TEST unittest_rdma 00:03:45.498 ************************************ 00:03:45.498 17:22:39 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:03:45.498 00:03:45.498 00:03:45.498 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.498 http://cunit.sourceforge.net/ 00:03:45.498 00:03:45.498 00:03:45.498 Suite: rdma_common 00:03:45.498 Test: test_spdk_rdma_pd ...[2024-07-15 17:22:39.900950] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:03:45.498 passed 00:03:45.498 00:03:45.498 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.498 suites 1 1 n/a 0 0 00:03:45.498 tests 1 1 1 0 0 00:03:45.498 asserts 31 31 31 0 n/a 00:03:45.498 00:03:45.498 Elapsed time = 0.000 seconds 00:03:45.498 [2024-07-15 17:22:39.901131] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:03:45.498 00:03:45.498 real 0m0.004s 
00:03:45.498 user 0m0.000s 00:03:45.498 sys 0m0.004s 00:03:45.498 17:22:39 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.498 ************************************ 00:03:45.498 END TEST unittest_rdma 00:03:45.498 ************************************ 00:03:45.498 17:22:39 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.498 17:22:39 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:45.498 17:22:39 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.498 17:22:39 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.498 ************************************ 00:03:45.498 START TEST unittest_nvmf 00:03:45.498 ************************************ 00:03:45.498 17:22:39 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:03:45.498 17:22:39 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:03:45.498 00:03:45.498 00:03:45.498 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.498 http://cunit.sourceforge.net/ 00:03:45.498 00:03:45.498 00:03:45.498 Suite: nvmf 00:03:45.498 Test: test_get_log_page ...passed 00:03:45.498 Test: test_process_fabrics_cmd ...passed 00:03:45.498 Test: test_connect ...[2024-07-15 17:22:39.946142] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:03:45.498 [2024-07-15 17:22:39.946360] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:03:45.498 passed 00:03:45.498 Test: test_get_ns_id_desc_list ...[2024-07-15 17:22:39.946441] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:03:45.498 [2024-07-15 17:22:39.946457] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:03:45.498 [2024-07-15 17:22:39.946471] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:03:45.498 [2024-07-15 17:22:39.946482] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:03:45.498 [2024-07-15 17:22:39.946494] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:03:45.498 [2024-07-15 17:22:39.946505] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:03:45.498 [2024-07-15 17:22:39.946517] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:03:45.498 [2024-07-15 17:22:39.946528] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
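Among the nvmf results above, the transport suite rejects io_unit_size 0, an io_unit_size larger than the iobuf large-buffer size, and a max_io_size of 4096 because it must be a power of two of at least 8 KiB. The power-of-two part is the usual bit trick, sketched here with assumed names:

    #include <stdbool.h>
    #include <stdint.h>

    static bool transport_sizes_ok(uint32_t io_unit_size, uint32_t max_io_size)
    {
        bool pow2 = max_io_size != 0 && (max_io_size & (max_io_size - 1)) == 0;

        /* 4096 from the log is a power of two but below the 8 KiB floor */
        return io_unit_size != 0 && pow2 && max_io_size >= 8 * 1024;
    }
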
00:03:45.498 [2024-07-15 17:22:39.946544] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:03:45.499 [2024-07-15 17:22:39.946563] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:03:45.499 [2024-07-15 17:22:39.946587] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:03:45.499 [2024-07-15 17:22:39.946601] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:03:45.499 [2024-07-15 17:22:39.946614] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:03:45.499 [2024-07-15 17:22:39.946627] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:03:45.499 [2024-07-15 17:22:39.946656] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:03:45.499 [2024-07-15 17:22:39.946699] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:03:45.499 [2024-07-15 17:22:39.946713] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:03:45.499 passed 00:03:45.499 Test: test_identify_ns ...[2024-07-15 17:22:39.946769] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:45.499 passed 00:03:45.499 Test: test_identify_ns_iocs_specific ...passed 00:03:45.499 Test: test_reservation_write_exclusive ...passed 00:03:45.499 Test: test_reservation_exclusive_access ...passed 00:03:45.499 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:03:45.499 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:03:45.499 Test: test_reservation_notification_log_page ...passed 00:03:45.499 Test: test_get_dif_ctx ...passed 00:03:45.499 Test: test_set_get_features ...passed 00:03:45.499 Test: test_identify_ctrlr ...passed 00:03:45.499 Test: test_identify_ctrlr_iocs_specific ...passed 00:03:45.499 Test: test_custom_admin_cmd ...passed 00:03:45.499 Test: test_fused_compare_and_write ...passed 00:03:45.499 Test: test_multi_async_event_reqs ...passed 00:03:45.499 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:03:45.499 Test: test_get_ana_log_page_multi_ns_per_anagrp ...[2024-07-15 17:22:39.946828] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:03:45.499 [2024-07-15 17:22:39.946859] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:03:45.499 [2024-07-15 17:22:39.946889] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:45.499 [2024-07-15 17:22:39.946946] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:03:45.499 [2024-07-15 17:22:39.947039] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:45.499 [2024-07-15 17:22:39.947055] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:03:45.499 [2024-07-15 17:22:39.947065] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:03:45.499 [2024-07-15 17:22:39.947075] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:03:45.499 [2024-07-15 17:22:39.947166] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:03:45.499 [2024-07-15 17:22:39.947178] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:45.499 [2024-07-15 17:22:39.947189] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:03:45.499 passed 00:03:45.499 Test: test_multi_async_events ...passed 00:03:45.499 Test: test_rae ...passed 00:03:45.499 Test: test_nvmf_ctrlr_create_destruct ...passed 00:03:45.499 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:03:45.499 Test: test_spdk_nvmf_request_zcopy_start ...passed 00:03:45.499 Test: test_zcopy_read ...passed 00:03:45.499 Test: test_zcopy_write ...passed 00:03:45.499 Test: test_nvmf_property_set ...passed 00:03:45.499 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:03:45.499 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:03:45.499 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:03:45.499 Test: test_nvmf_check_qpair_active ...passed 00:03:45.499 00:03:45.499 [2024-07-15 17:22:39.947272] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:03:45.499 [2024-07-15 17:22:39.947287] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:03:45.499 [2024-07-15 17:22:39.947319] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:45.499 [2024-07-15 17:22:39.947329] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:03:45.499 [2024-07-15 17:22:39.947347] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:03:45.499 [2024-07-15 17:22:39.947358] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:03:45.499 [2024-07-15 17:22:39.947369] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:03:45.499 [2024-07-15 17:22:39.947396] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:03:45.499 [2024-07-15 17:22:39.947407] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4745:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:03:45.499 [2024-07-15 17:22:39.947418] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:03:45.499 [2024-07-15 17:22:39.947429] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:03:45.499 [2024-07-15 17:22:39.947439] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:03:45.499 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.499 suites 1 1 n/a 0 0 00:03:45.499 tests 32 32 32 0 0 00:03:45.499 asserts 977 977 977 0 n/a 00:03:45.499 00:03:45.499 Elapsed time = 0.000 seconds 00:03:45.499 17:22:39 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:03:45.499 00:03:45.499 00:03:45.499 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.499 http://cunit.sourceforge.net/ 00:03:45.499 00:03:45.499 00:03:45.499 Suite: nvmf 00:03:45.499 Test: test_get_rw_params ...passed 00:03:45.499 Test: test_get_rw_ext_params ...passed 00:03:45.499 Test: test_lba_in_range ...passed 00:03:45.499 Test: test_get_dif_ctx ...passed 00:03:45.499 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:03:45.499 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:03:45.499 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:03:45.499 Test: test_nvmf_bdev_ctrlr_cmd ...passed 00:03:45.499 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:03:45.499 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed 00:03:45.499 00:03:45.499 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.499 suites 1 1 n/a 0 0 00:03:45.499 tests 10 10 10 0 0 00:03:45.499 asserts 159 159 159 0 n/a 00:03:45.499 00:03:45.499 Elapsed time = 0.000 seconds 00:03:45.499 [2024-07-15 17:22:39.954327] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:03:45.499 [2024-07-15 17:22:39.954565] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:03:45.499 [2024-07-15 17:22:39.954589] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:03:45.499 [2024-07-15 17:22:39.954612] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:03:45.499 [2024-07-15 17:22:39.954628] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:03:45.499 [2024-07-15 17:22:39.954646] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:03:45.499 [2024-07-15 17:22:39.954667] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:03:45.499 [2024-07-15 17:22:39.954685] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:03:45.499 [2024-07-15 17:22:39.954700] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:03:45.499 17:22:39 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:03:45.499 00:03:45.499 00:03:45.499 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.499 http://cunit.sourceforge.net/ 00:03:45.499 00:03:45.499 00:03:45.499 
Suite: nvmf 00:03:45.499 Test: test_discovery_log ...passed 00:03:45.499 Test: test_discovery_log_with_filters ...passed 00:03:45.499 00:03:45.499 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.499 suites 1 1 n/a 0 0 00:03:45.499 tests 2 2 2 0 0 00:03:45.499 asserts 238 238 238 0 n/a 00:03:45.499 00:03:45.499 Elapsed time = 0.000 seconds 00:03:45.499 17:22:39 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:03:45.499 00:03:45.499 00:03:45.499 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.499 http://cunit.sourceforge.net/ 00:03:45.499 00:03:45.499 00:03:45.499 Suite: nvmf 00:03:45.499 Test: nvmf_test_create_subsystem ...[2024-07-15 17:22:39.967218] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:03:45.499 [2024-07-15 17:22:39.967447] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:03:45.499 [2024-07-15 17:22:39.967476] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:03:45.499 [2024-07-15 17:22:39.967492] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:03:45.499 [2024-07-15 17:22:39.967507] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:03:45.499 [2024-07-15 17:22:39.967521] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:03:45.499 [2024-07-15 17:22:39.967535] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:03:45.500 [2024-07-15 17:22:39.967548] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:03:45.500 [2024-07-15 17:22:39.967562] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:03:45.500 [2024-07-15 17:22:39.967576] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:03:45.500 [2024-07-15 17:22:39.967590] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
00:03:45.500 [2024-07-15 17:22:39.967603] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:03:45.500 [2024-07-15 17:22:39.967625] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:03:45.500 [2024-07-15 17:22:39.967640] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:03:45.500 [2024-07-15 17:22:39.967674] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:03:45.500 [2024-07-15 17:22:39.967688] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:03:45.500 [2024-07-15 17:22:39.967706] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:03:45.500 [2024-07-15 17:22:39.967720] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:03:45.500 [2024-07-15 17:22:39.967734] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:45.500 passed 00:03:45.500 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:03:45.500 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:03:45.500 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:03:45.500 Test: test_spdk_nvmf_ns_visible ...passed 00:03:45.500 Test: test_reservation_register ...passed 00:03:45.500 Test: test_reservation_register_with_ptpl ...passed 00:03:45.500 Test: test_reservation_acquire_preempt_1 ...passed 00:03:45.500 Test: test_reservation_acquire_release_with_ptpl ...[2024-07-15 17:22:39.967757] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:45.500 [2024-07-15 17:22:39.967772] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:03:45.500 [2024-07-15 17:22:39.967786] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:03:45.500 [2024-07-15 17:22:39.967858] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:03:45.500 [2024-07-15 17:22:39.967876] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:03:45.500 [2024-07-15 17:22:39.967906] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2158:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:03:45.500 [2024-07-15 17:22:39.967945] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:03:45.500 [2024-07-15 17:22:39.968042] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:45.500 [2024-07-15 17:22:39.968064] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant 00:03:45.500 [2024-07-15 17:22:39.968319] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:45.500 passed 00:03:45.500 Test: test_reservation_release ...passed 00:03:45.500 Test: test_reservation_unregister_notification ...passed 00:03:45.500 Test: test_reservation_release_notification ...passed 00:03:45.500 Test: test_reservation_release_notification_write_exclusive ...passed 00:03:45.500 Test: test_reservation_clear_notification ...passed 00:03:45.500 Test: test_reservation_preempt_notification ...passed 00:03:45.500 Test: test_spdk_nvmf_ns_event ...passed 00:03:45.500 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:03:45.500 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:03:45.500 Test: test_spdk_nvmf_subsystem_add_host ...[2024-07-15 17:22:39.968522] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:45.500 [2024-07-15 17:22:39.968553] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:45.500 [2024-07-15 17:22:39.968577] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:45.500 [2024-07-15 17:22:39.968601] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:45.500 [2024-07-15 17:22:39.968625] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:45.500 [2024-07-15 17:22:39.968648] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:03:45.500 [2024-07-15 17:22:39.968757] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:03:45.500 passed 00:03:45.500 Test: test_nvmf_ns_reservation_report ...passed 00:03:45.500 Test: test_nvmf_nqn_is_valid ...passed 00:03:45.500 Test: test_nvmf_ns_reservation_restore ...passed 00:03:45.500 Test: test_nvmf_subsystem_state_change ...passed 00:03:45.500 Test: test_nvmf_reservation_custom_ops ...[2024-07-15 17:22:39.968814] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:03:45.500 [2024-07-15 17:22:39.968842] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3466:nvmf_ns_reservation_report: *ERROR*: 
NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:03:45.500 [2024-07-15 17:22:39.968872] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:03:45.500 [2024-07-15 17:22:39.968887] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:d6692fa1-42ce-11ef-96ac-773515fba64": uuid is not the correct length 00:03:45.500 [2024-07-15 17:22:39.968902] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:03:45.500 [2024-07-15 17:22:39.968941] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:03:45.500 passed 00:03:45.500 00:03:45.500 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.500 suites 1 1 n/a 0 0 00:03:45.500 tests 24 24 24 0 0 00:03:45.500 asserts 499 499 499 0 n/a 00:03:45.500 00:03:45.500 Elapsed time = 0.000 seconds 00:03:45.500 17:22:39 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:03:45.500 00:03:45.500 00:03:45.500 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.500 http://cunit.sourceforge.net/ 00:03:45.500 00:03:45.500 00:03:45.500 Suite: nvmf 00:03:45.500 Test: test_nvmf_tcp_create ...[2024-07-15 17:22:39.979421] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:03:45.500 passed 00:03:45.500 Test: test_nvmf_tcp_destroy ...passed 00:03:45.500 Test: test_nvmf_tcp_poll_group_create ...passed 00:03:45.500 Test: test_nvmf_tcp_send_c2h_data ...passed 00:03:45.500 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:03:45.500 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:03:45.500 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:03:45.500 Test: test_nvmf_tcp_send_c2h_term_req ...[2024-07-15 17:22:39.991882] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.500 [2024-07-15 17:22:39.991904] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.500 [2024-07-15 17:22:39.991915] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.500 [2024-07-15 17:22:39.991924] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.500 passed 00:03:45.500 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:03:45.500 Test: test_nvmf_tcp_icreq_handle ...[2024-07-15 17:22:39.991941] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.500 [2024-07-15 17:22:39.991969] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:45.500 [2024-07-15 17:22:39.991979] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.500 [2024-07-15 17:22:39.991987] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209933b0 is same with the state(5) to be set 00:03:45.500 passed 00:03:45.500 Test: test_nvmf_tcp_check_xfer_type ...passed 00:03:45.500 Test: test_nvmf_tcp_invalid_sgl ...passed 00:03:45.501 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-15 17:22:39.991995] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:03:45.501 [2024-07-15 17:22:39.992019] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209933b0 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992035] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992043] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209933b0 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992059] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992067] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209933b0 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992082] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2518:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:03:45.501 [2024-07-15 17:22:39.992091] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992099] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209933b0 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992109] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x820992c38 00:03:45.501 [2024-07-15 17:22:39.992118] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992126] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992135] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2308:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x8209934a8 00:03:45.501 [2024-07-15 17:22:39.992142] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992150] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992159] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:03:45.501 [2024-07-15 17:22:39.992173] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992181] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992189] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:03:45.501 [2024-07-15 17:22:39.992197] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992205] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992214] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992236] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992246] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992254] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992270] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992278] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992287] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992295] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992303] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 [2024-07-15 17:22:39.992311] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 [2024-07-15 17:22:39.992319] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:03:45.501 passed 00:03:45.501 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-15 17:22:39.992327] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8209934a8 is same with the state(5) to be set 00:03:45.501 passed 00:03:45.501 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:03:45.501 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-15 17:22:39.997703] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:03:45.501 [2024-07-15 17:22:39.997724] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:03:45.501 passed 00:03:45.501 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:03:45.501 00:03:45.501 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.501 suites 1 1 n/a 0 0 00:03:45.501 tests 17 17 17 0 0 00:03:45.501 asserts 222 222 222 0 n/a 00:03:45.501 00:03:45.501 Elapsed time = 0.016 seconds 00:03:45.501 [2024-07-15 17:22:39.997844] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:03:45.501 [2024-07-15 17:22:39.997858] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:03:45.501 [2024-07-15 17:22:39.997924] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:03:45.501 [2024-07-15 17:22:39.997934] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:03:45.501 17:22:40 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:03:45.501 00:03:45.501 00:03:45.501 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.501 http://cunit.sourceforge.net/ 00:03:45.501 00:03:45.501 00:03:45.501 Suite: nvmf 00:03:45.501 Test: test_nvmf_tgt_create_poll_group ...passed 00:03:45.501 00:03:45.501 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.501 suites 1 1 n/a 0 0 00:03:45.501 tests 1 1 1 0 0 00:03:45.501 asserts 17 17 17 0 n/a 00:03:45.501 00:03:45.501 Elapsed time = 0.000 seconds 00:03:45.501 00:03:45.501 real 0m0.067s 00:03:45.501 user 0m0.012s 00:03:45.501 sys 0m0.060s 00:03:45.501 17:22:40 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.501 17:22:40 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:03:45.501 ************************************ 00:03:45.501 END TEST unittest_nvmf 00:03:45.501 ************************************ 00:03:45.501 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.501 17:22:40 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:45.501 17:22:40 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:45.501 17:22:40 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:45.501 17:22:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.501 17:22:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.501 17:22:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.501 ************************************ 00:03:45.501 START TEST unittest_nvmf_rdma 00:03:45.501 ************************************ 00:03:45.501 17:22:40 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:03:45.501 00:03:45.501 00:03:45.501 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.501 http://cunit.sourceforge.net/ 00:03:45.501 00:03:45.501 00:03:45.501 Suite: nvmf 00:03:45.501 Test: test_spdk_nvmf_rdma_request_parse_sgl ...passed 00:03:45.501 Test: test_spdk_nvmf_rdma_request_process ...passed 00:03:45.501 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:03:45.501 Test: 
test_spdk_nvmf_rdma_request_parse_sgl_with_md ...[2024-07-15 17:22:40.056117] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 00:03:45.501 [2024-07-15 17:22:40.056336] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:03:45.501 [2024-07-15 17:22:40.056350] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:03:45.501 passed 00:03:45.501 Test: test_nvmf_rdma_opts_init ...passed 00:03:45.501 Test: test_nvmf_rdma_request_free_data ...passed 00:03:45.501 Test: test_nvmf_rdma_resources_create ...passed 00:03:45.501 Test: test_nvmf_rdma_qpair_compare ...passed 00:03:45.501 Test: test_nvmf_rdma_resize_cq ...[2024-07-15 17:22:40.057024] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:03:45.501 Using CQ of insufficient size may lead to CQ overrun 00:03:45.501 passed 00:03:45.501 00:03:45.501 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.501 suites 1 1 n/a 0 0 00:03:45.501 tests 9 9 9 0 0 00:03:45.501 asserts 579 579 579 0 n/a 00:03:45.501 00:03:45.501 Elapsed time = 0.000 seconds 00:03:45.501 [2024-07-15 17:22:40.057047] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:03:45.501 [2024-07-15 17:22:40.057104] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:03:45.501 00:03:45.501 real 0m0.007s 00:03:45.501 user 0m0.000s 00:03:45.501 sys 0m0.008s 00:03:45.501 17:22:40 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.501 17:22:40 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:03:45.501 ************************************ 00:03:45.501 END TEST unittest_nvmf_rdma 00:03:45.501 ************************************ 00:03:45.501 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.501 17:22:40 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:45.501 17:22:40 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:03:45.501 17:22:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.501 17:22:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.501 17:22:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.501 ************************************ 00:03:45.501 START TEST unittest_scsi 00:03:45.501 ************************************ 00:03:45.501 17:22:40 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:03:45.502 17:22:40 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:03:45.502 00:03:45.502 00:03:45.502 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.502 http://cunit.sourceforge.net/ 00:03:45.502 00:03:45.502 00:03:45.502 Suite: dev_suite 00:03:45.502 Test: dev_destruct_null_dev ...passed 00:03:45.502 Test: dev_destruct_zero_luns ...passed 00:03:45.502 Test: dev_destruct_null_lun ...passed 00:03:45.502 Test: dev_destruct_success ...passed 00:03:45.502 Test: 
dev_construct_num_luns_zero ...passed 00:03:45.502 Test: dev_construct_no_lun_zero ...passed 00:03:45.502 Test: dev_construct_null_lun ...passed 00:03:45.502 Test: dev_construct_name_too_long ...passed 00:03:45.502 Test: dev_construct_success ...passed 00:03:45.502 Test: dev_construct_success_lun_zero_not_first ...[2024-07-15 17:22:40.097241] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:03:45.502 [2024-07-15 17:22:40.097420] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:03:45.502 [2024-07-15 17:22:40.097442] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:03:45.502 [2024-07-15 17:22:40.097463] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:03:45.502 passed 00:03:45.502 Test: dev_queue_mgmt_task_success ...passed 00:03:45.502 Test: dev_queue_task_success ...passed 00:03:45.502 Test: dev_stop_success ...passed 00:03:45.502 Test: dev_add_port_max_ports ...passed 00:03:45.502 Test: dev_add_port_construct_failure1 ...passed 00:03:45.502 Test: dev_add_port_construct_failure2 ...passed 00:03:45.502 Test: dev_add_port_success1 ...passed 00:03:45.502 Test: dev_add_port_success2 ...passed 00:03:45.502 Test: dev_add_port_success3 ...passed 00:03:45.502 Test: dev_find_port_by_id_num_ports_zero ...passed 00:03:45.502 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:03:45.502 Test: dev_find_port_by_id_success ...passed 00:03:45.502 Test: dev_add_lun_bdev_not_found ...passed 00:03:45.502 Test: dev_add_lun_no_free_lun_id ...[2024-07-15 17:22:40.097536] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:03:45.502 [2024-07-15 17:22:40.097561] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:03:45.502 [2024-07-15 17:22:40.097582] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:03:45.502 passed 00:03:45.502 Test: dev_add_lun_success1 ...passed 00:03:45.502 Test: dev_add_lun_success2 ...passed 00:03:45.502 Test: dev_check_pending_tasks ...passed 00:03:45.502 Test: dev_iterate_luns ...passed[2024-07-15 17:22:40.097823] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:03:45.502 00:03:45.502 Test: dev_find_free_lun ...passed 00:03:45.502 00:03:45.502 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.502 suites 1 1 n/a 0 0 00:03:45.502 tests 29 29 29 0 0 00:03:45.502 asserts 97 97 97 0 n/a 00:03:45.502 00:03:45.502 Elapsed time = 0.000 seconds 00:03:45.502 17:22:40 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:03:45.502 00:03:45.502 00:03:45.502 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.502 http://cunit.sourceforge.net/ 00:03:45.502 00:03:45.502 00:03:45.502 Suite: lun_suite 00:03:45.502 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:03:45.502 Test: 
lun_task_mgmt_execute_abort_task_all_not_supported ...[2024-07-15 17:22:40.103830] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:03:45.502 passed 00:03:45.502 Test: lun_task_mgmt_execute_lun_reset ...passed 00:03:45.502 Test: lun_task_mgmt_execute_target_reset ...passed 00:03:45.502 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-15 17:22:40.104023] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:03:45.502 passed 00:03:45.502 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:03:45.502 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:03:45.502 Test: lun_append_task_null_lun_not_supported ...passed 00:03:45.502 Test: lun_execute_scsi_task_pending ...passed 00:03:45.502 Test: lun_execute_scsi_task_complete ...passed 00:03:45.502 Test: lun_execute_scsi_task_resize ...passed 00:03:45.502 Test: lun_destruct_success ...passed 00:03:45.502 Test: lun_construct_null_ctx ...passed 00:03:45.502 Test: lun_construct_success ...passed 00:03:45.502 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:03:45.502 Test: lun_reset_task_suspend_scsi_task ...passed 00:03:45.502 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:03:45.502 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...[2024-07-15 17:22:40.104069] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:03:45.502 [2024-07-15 17:22:40.104133] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:03:45.502 passed 00:03:45.502 00:03:45.502 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.502 suites 1 1 n/a 0 0 00:03:45.502 tests 18 18 18 0 0 00:03:45.502 asserts 153 153 153 0 n/a 00:03:45.502 00:03:45.502 Elapsed time = 0.000 seconds 00:03:45.502 17:22:40 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:03:45.502 00:03:45.502 00:03:45.502 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.502 http://cunit.sourceforge.net/ 00:03:45.502 00:03:45.502 00:03:45.502 Suite: scsi_suite 00:03:45.502 Test: scsi_init ...passed 00:03:45.502 00:03:45.502 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.502 suites 1 1 n/a 0 0 00:03:45.502 tests 1 1 1 0 0 00:03:45.502 asserts 1 1 1 0 n/a 00:03:45.502 00:03:45.502 Elapsed time = 0.000 seconds 00:03:45.502 17:22:40 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:03:45.502 00:03:45.502 00:03:45.502 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.502 http://cunit.sourceforge.net/ 00:03:45.502 00:03:45.502 00:03:45.502 Suite: translation_suite 00:03:45.502 Test: mode_select_6_test ...passed 00:03:45.502 Test: mode_select_6_test2 ...passed 00:03:45.502 Test: mode_sense_6_test ...passed 00:03:45.502 Test: mode_sense_10_test ...passed 00:03:45.502 Test: inquiry_evpd_test ...passed 00:03:45.502 Test: inquiry_standard_test ...passed 00:03:45.502 Test: inquiry_overflow_test ...passed 00:03:45.502 Test: task_complete_test ...passed 00:03:45.502 Test: lba_range_test ...passed 00:03:45.502 Test: xfer_len_test ...[2024-07-15 17:22:40.114190] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:03:45.502 passed 00:03:45.502 Test: xfer_test 
...passed 00:03:45.502 Test: scsi_name_padding_test ...passed 00:03:45.502 Test: get_dif_ctx_test ...passed 00:03:45.502 Test: unmap_split_test ...passed 00:03:45.502 00:03:45.502 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.502 suites 1 1 n/a 0 0 00:03:45.502 tests 14 14 14 0 0 00:03:45.502 asserts 1205 1205 1205 0 n/a 00:03:45.502 00:03:45.502 Elapsed time = 0.000 seconds 00:03:45.502 17:22:40 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:03:45.502 00:03:45.502 00:03:45.502 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.502 http://cunit.sourceforge.net/ 00:03:45.502 00:03:45.502 00:03:45.502 Suite: reservation_suite 00:03:45.502 Test: test_reservation_register ...[2024-07-15 17:22:40.120190] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:45.502 passed 00:03:45.502 Test: test_reservation_reserve ...[2024-07-15 17:22:40.120467] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:45.502 [2024-07-15 17:22:40.120491] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:03:45.502 [2024-07-15 17:22:40.120507] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:03:45.502 passed 00:03:45.502 Test: test_all_registrant_reservation_reserve ...[2024-07-15 17:22:40.120528] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:45.502 passed 00:03:45.502 Test: test_all_registrant_reservation_access ...[2024-07-15 17:22:40.120553] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:45.502 [2024-07-15 17:22:40.120574] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:03:45.502 [2024-07-15 17:22:40.120588] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:03:45.502 passed 00:03:45.502 Test: test_reservation_preempt_non_all_regs ...[2024-07-15 17:22:40.120607] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:45.502 passed 00:03:45.502 Test: test_reservation_preempt_all_regs ...passed 00:03:45.502 Test: test_reservation_cmds_conflict ...[2024-07-15 17:22:40.120622] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:03:45.502 [2024-07-15 17:22:40.120652] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:45.502 [2024-07-15 17:22:40.120672] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:45.502 [2024-07-15 17:22:40.120685] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:03:45.502 [2024-07-15 17:22:40.120697] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access 
reservation type rejects command 0x28 00:03:45.502 [2024-07-15 17:22:40.120708] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:45.502 [2024-07-15 17:22:40.120718] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:03:45.502 [2024-07-15 17:22:40.120729] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:03:45.502 passed 00:03:45.502 Test: test_scsi2_reserve_release ...passed 00:03:45.502 Test: test_pr_with_scsi2_reserve_release ...[2024-07-15 17:22:40.120753] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:03:45.502 passed 00:03:45.502 00:03:45.503 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.503 suites 1 1 n/a 0 0 00:03:45.503 tests 9 9 9 0 0 00:03:45.503 asserts 344 344 344 0 n/a 00:03:45.503 00:03:45.503 Elapsed time = 0.000 seconds 00:03:45.503 00:03:45.503 real 0m0.028s 00:03:45.503 user 0m0.002s 00:03:45.503 sys 0m0.026s 00:03:45.503 17:22:40 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.503 17:22:40 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:03:45.503 ************************************ 00:03:45.503 END TEST unittest_scsi 00:03:45.503 ************************************ 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.503 17:22:40 unittest -- unit/unittest.sh@278 -- # uname -s 00:03:45.503 17:22:40 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:03:45.503 17:22:40 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.503 ************************************ 00:03:45.503 START TEST unittest_thread 00:03:45.503 ************************************ 00:03:45.503 17:22:40 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:03:45.503 00:03:45.503 00:03:45.503 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.503 http://cunit.sourceforge.net/ 00:03:45.503 00:03:45.503 00:03:45.503 Suite: io_channel 00:03:45.503 Test: thread_alloc ...passed 00:03:45.503 Test: thread_send_msg ...passed 00:03:45.503 Test: thread_poller ...passed 00:03:45.503 Test: poller_pause ...passed 00:03:45.503 Test: thread_for_each ...passed 00:03:45.503 Test: for_each_channel_remove ...passed 00:03:45.503 Test: for_each_channel_unreg ...[2024-07-15 17:22:40.169475] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x8210980f4 already registered (old:0x10f384e67000 new:0x10f384e67180) 00:03:45.503 passed 00:03:45.503 Test: thread_name ...passed 00:03:45.503 Test: channel ...passed 00:03:45.503 Test: channel_destroy_races ...[2024-07-15 17:22:40.170128] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x228838 00:03:45.503 passed 00:03:45.503 Test: thread_exit_test ...[2024-07-15 
17:22:40.170786] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 640:thread_exit: *ERROR*: thread 0x10f384e2ca80 got timeout, and move it to the exited state forcefully 00:03:45.503 passed 00:03:45.503 Test: thread_update_stats_test ...passed 00:03:45.503 Test: nested_channel ...passed 00:03:45.503 Test: device_unregister_and_thread_exit_race ...passed 00:03:45.503 Test: cache_closest_timed_poller ...passed 00:03:45.503 Test: multi_timed_pollers_have_same_expiration ...passed 00:03:45.503 Test: io_device_lookup ...passed 00:03:45.503 Test: spdk_spin ...[2024-07-15 17:22:40.172407] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:45.503 [2024-07-15 17:22:40.172441] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8210980f0 00:03:45.503 [2024-07-15 17:22:40.172464] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:03:45.503 [2024-07-15 17:22:40.172715] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:03:45.503 [2024-07-15 17:22:40.172729] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8210980f0 00:03:45.503 [2024-07-15 17:22:40.172739] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:45.503 [2024-07-15 17:22:40.172751] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8210980f0 00:03:45.503 [2024-07-15 17:22:40.172771] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:03:45.503 [2024-07-15 17:22:40.172789] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8210980f0 00:03:45.503 [2024-07-15 17:22:40.172811] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:03:45.503 [2024-07-15 17:22:40.172830] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x8210980f0 00:03:45.503 passed 00:03:45.503 Test: for_each_channel_and_thread_exit_race ...passed 00:03:45.503 Test: for_each_thread_and_thread_exit_race ...passed 00:03:45.503 00:03:45.503 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.503 suites 1 1 n/a 0 0 00:03:45.503 tests 20 20 20 0 0 00:03:45.503 asserts 409 409 409 0 n/a 00:03:45.503 00:03:45.503 Elapsed time = 0.008 seconds 00:03:45.503 00:03:45.503 real 0m0.012s 00:03:45.503 user 0m0.003s 00:03:45.503 sys 0m0.010s 00:03:45.503 17:22:40 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.503 17:22:40 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:03:45.503 ************************************ 00:03:45.503 END TEST unittest_thread 00:03:45.503 ************************************ 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.503 17:22:40 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:45.503 
17:22:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.503 ************************************ 00:03:45.503 START TEST unittest_iobuf 00:03:45.503 ************************************ 00:03:45.503 17:22:40 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:03:45.503 00:03:45.503 00:03:45.503 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.503 http://cunit.sourceforge.net/ 00:03:45.503 00:03:45.503 00:03:45.503 Suite: io_channel 00:03:45.503 Test: iobuf ...passed 00:03:45.503 Test: iobuf_cache ...[2024-07-15 17:22:40.229384] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:45.503 [2024-07-15 17:22:40.229628] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:45.503 [2024-07-15 17:22:40.229665] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:03:45.503 [2024-07-15 17:22:40.229681] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:03:45.503 [2024-07-15 17:22:40.229696] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:03:45.503 [2024-07-15 17:22:40.229708] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
00:03:45.503 passed 00:03:45.503 00:03:45.503 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.503 suites 1 1 n/a 0 0 00:03:45.503 tests 2 2 2 0 0 00:03:45.503 asserts 107 107 107 0 n/a 00:03:45.503 00:03:45.503 Elapsed time = 0.000 seconds 00:03:45.503 00:03:45.503 real 0m0.005s 00:03:45.503 user 0m0.004s 00:03:45.503 sys 0m0.000s 00:03:45.503 17:22:40 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.503 ************************************ 00:03:45.503 END TEST unittest_iobuf 00:03:45.503 ************************************ 00:03:45.503 17:22:40 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.503 17:22:40 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.503 17:22:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.503 ************************************ 00:03:45.503 START TEST unittest_util 00:03:45.503 ************************************ 00:03:45.503 17:22:40 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:03:45.503 17:22:40 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:03:45.503 00:03:45.503 00:03:45.503 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.503 http://cunit.sourceforge.net/ 00:03:45.503 00:03:45.503 00:03:45.503 Suite: base64 00:03:45.503 Test: test_base64_get_encoded_strlen ...passed 00:03:45.503 Test: test_base64_get_decoded_len ...passed 00:03:45.503 Test: test_base64_encode ...passed 00:03:45.503 Test: test_base64_decode ...passed 00:03:45.503 Test: test_base64_urlsafe_encode ...passed 00:03:45.503 Test: test_base64_urlsafe_decode ...passed 00:03:45.503 00:03:45.503 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.503 suites 1 1 n/a 0 0 00:03:45.503 tests 6 6 6 0 0 00:03:45.503 asserts 112 112 112 0 n/a 00:03:45.503 00:03:45.503 Elapsed time = 0.000 seconds 00:03:45.503 17:22:40 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:03:45.503 00:03:45.503 00:03:45.503 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.503 http://cunit.sourceforge.net/ 00:03:45.503 00:03:45.503 00:03:45.503 Suite: bit_array 00:03:45.503 Test: test_1bit ...passed 00:03:45.503 Test: test_64bit ...passed 00:03:45.503 Test: test_find ...passed 00:03:45.503 Test: test_resize ...passed 00:03:45.503 Test: test_errors ...passed 00:03:45.503 Test: test_count ...passed 00:03:45.503 Test: test_mask_store_load ...passed 00:03:45.503 Test: test_mask_clear ...passed 00:03:45.503 00:03:45.503 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.503 suites 1 1 n/a 0 0 00:03:45.503 tests 8 8 8 0 0 00:03:45.503 asserts 5075 5075 5075 0 n/a 00:03:45.503 00:03:45.503 Elapsed time = 0.000 seconds 00:03:45.503 17:22:40 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:03:45.503 00:03:45.503 00:03:45.503 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.503 http://cunit.sourceforge.net/ 00:03:45.503 00:03:45.503 00:03:45.504 Suite: cpuset 00:03:45.504 Test: test_cpuset ...passed 00:03:45.504 Test: test_cpuset_parse ...[2024-07-15 
17:22:40.282821] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:03:45.504 [2024-07-15 17:22:40.283007] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:03:45.504 [2024-07-15 17:22:40.283025] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:03:45.504 [2024-07-15 17:22:40.283036] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:03:45.504 [2024-07-15 17:22:40.283047] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:03:45.504 [2024-07-15 17:22:40.283057] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:03:45.504 passed 00:03:45.504 Test: test_cpuset_fmt ...passed 00:03:45.504 Test: test_cpuset_foreach ...passed 00:03:45.504 00:03:45.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.504 suites 1 1 n/a 0 0 00:03:45.504 tests 4 4 4 0 0 00:03:45.504 asserts 90 90 90 0 n/a 00:03:45.504 00:03:45.504 Elapsed time = 0.000 seconds 00:03:45.504 [2024-07-15 17:22:40.283067] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:03:45.504 [2024-07-15 17:22:40.283078] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:03:45.504 17:22:40 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:03:45.504 00:03:45.504 00:03:45.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.504 http://cunit.sourceforge.net/ 00:03:45.504 00:03:45.504 00:03:45.504 Suite: crc16 00:03:45.504 Test: test_crc16_t10dif ...passed 00:03:45.504 Test: test_crc16_t10dif_seed ...passed 00:03:45.504 Test: test_crc16_t10dif_copy ...passed 00:03:45.504 00:03:45.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.504 suites 1 1 n/a 0 0 00:03:45.504 tests 3 3 3 0 0 00:03:45.504 asserts 5 5 5 0 n/a 00:03:45.504 00:03:45.504 Elapsed time = 0.000 seconds 00:03:45.504 17:22:40 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:03:45.504 00:03:45.504 00:03:45.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.504 http://cunit.sourceforge.net/ 00:03:45.504 00:03:45.504 00:03:45.504 Suite: crc32_ieee 00:03:45.504 Test: test_crc32_ieee ...passed 00:03:45.504 00:03:45.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.504 suites 1 1 n/a 0 0 00:03:45.504 tests 1 1 1 0 0 00:03:45.504 asserts 1 1 1 0 n/a 00:03:45.504 00:03:45.504 Elapsed time = 0.000 seconds 00:03:45.504 17:22:40 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:03:45.504 00:03:45.504 00:03:45.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.504 http://cunit.sourceforge.net/ 00:03:45.504 00:03:45.504 00:03:45.504 Suite: crc32c 00:03:45.504 Test: test_crc32c ...passed 00:03:45.504 Test: test_crc32c_nvme ...passed 00:03:45.504 00:03:45.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.504 suites 1 1 n/a 0 0 00:03:45.504 tests 2 2 2 0 0 00:03:45.504 asserts 16 16 16 0 n/a 00:03:45.504 
00:03:45.504 Elapsed time = 0.000 seconds 00:03:45.504 17:22:40 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:03:45.504 00:03:45.504 00:03:45.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.504 http://cunit.sourceforge.net/ 00:03:45.504 00:03:45.504 00:03:45.504 Suite: crc64 00:03:45.504 Test: test_crc64_nvme ...passed 00:03:45.504 00:03:45.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.504 suites 1 1 n/a 0 0 00:03:45.504 tests 1 1 1 0 0 00:03:45.504 asserts 4 4 4 0 n/a 00:03:45.504 00:03:45.504 Elapsed time = 0.000 seconds 00:03:45.504 17:22:40 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:03:45.504 00:03:45.504 00:03:45.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.504 http://cunit.sourceforge.net/ 00:03:45.504 00:03:45.504 00:03:45.504 Suite: string 00:03:45.504 Test: test_parse_ip_addr ...passed 00:03:45.504 Test: test_str_chomp ...passed 00:03:45.504 Test: test_parse_capacity ...passed 00:03:45.504 Test: test_sprintf_append_realloc ...passed 00:03:45.504 Test: test_strtol ...passed 00:03:45.504 Test: test_strtoll ...passed 00:03:45.504 Test: test_strarray ...passed 00:03:45.504 Test: test_strcpy_replace ...passed 00:03:45.504 00:03:45.504 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.504 suites 1 1 n/a 0 0 00:03:45.504 tests 8 8 8 0 0 00:03:45.504 asserts 161 161 161 0 n/a 00:03:45.504 00:03:45.504 Elapsed time = 0.000 seconds 00:03:45.504 17:22:40 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:03:45.504 00:03:45.504 00:03:45.504 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.504 http://cunit.sourceforge.net/ 00:03:45.504 00:03:45.504 00:03:45.504 Suite: dif 00:03:45.504 Test: dif_generate_and_verify_test ...[2024-07-15 17:22:40.317517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:45.504 [2024-07-15 17:22:40.317800] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:45.504 [2024-07-15 17:22:40.317857] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:03:45.504 [2024-07-15 17:22:40.317907] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:45.504 [2024-07-15 17:22:40.318293] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:45.504 passed 00:03:45.504 Test: dif_disable_check_test ...[2024-07-15 17:22:40.318394] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:03:45.504 [2024-07-15 17:22:40.318647] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:45.504 [2024-07-15 17:22:40.318719] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:45.504 passed 00:03:45.504 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-15 17:22:40.318804] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:03:45.504 [2024-07-15 17:22:40.319046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:03:45.504 [2024-07-15 17:22:40.319118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:03:45.504 [2024-07-15 17:22:40.319199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:03:45.504 [2024-07-15 17:22:40.319269] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:03:45.504 [2024-07-15 17:22:40.319339] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:45.504 [2024-07-15 17:22:40.319409] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:45.504 [2024-07-15 17:22:40.319478] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:45.504 [2024-07-15 17:22:40.319546] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:03:45.504 passed 00:03:45.504 Test: dif_apptag_mask_test ...[2024-07-15 17:22:40.319616] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:45.504 [2024-07-15 17:22:40.319686] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:45.504 [2024-07-15 17:22:40.319756] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:03:45.504 [2024-07-15 17:22:40.319829] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:45.504 passed 00:03:45.504 Test: dif_sec_512_md_0_error_test ...passed 00:03:45.504 Test: dif_sec_4096_md_0_error_test ...[2024-07-15 17:22:40.319900] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:03:45.504 [2024-07-15 17:22:40.319948] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:45.505 [2024-07-15 17:22:40.319966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:03:45.505 passed 00:03:45.505 Test: dif_sec_4100_md_128_error_test ...passed 00:03:45.505 Test: dif_guard_seed_test ...passed 00:03:45.505 Test: dif_guard_value_test ...passed 00:03:45.505 Test: dif_disable_sec_512_md_8_single_iov_test ...passed 00:03:45.505 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...[2024-07-15 17:22:40.319982] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
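The "Metadata size is smaller than DIF size" messages above come from spdk_dif_ctx_init (dif.c:510) and are the expected output of the dif_sec_512_md_0_error and dif_sec_4096_md_0_error cases: a DIF context cannot be initialized when the per-block metadata region is too small to hold the 8-byte T10 protection-information tuple. The test names encode the sector and metadata sizes being exercised (for example sec_4096_md_128 is a 4096-byte block with 128 bytes of metadata). A minimal sketch of the size relationship being checked, assuming the standard T10 PI layout; the struct and helper names below are illustrative, not SPDK code:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Standard T10 protection-information tuple: 8 bytes per block. */
    struct t10_pi_tuple {
        uint16_t guard;    /* CRC16 of the data block */
        uint16_t app_tag;  /* application-defined tag */
        uint32_t ref_tag;  /* typically derived from the LBA */
    };

    /* Mirrors the constraint the *_md_0_error cases trip over:
     * the metadata area must be able to hold the PI tuple. */
    static bool md_size_holds_dif(uint32_t md_size)
    {
        return md_size >= sizeof(struct t10_pi_tuple);
    }

    int main(void)
    {
        /* md_size = 0, as in dif_sec_512_md_0_error_test: init must fail. */
        printf("md_size=0   ok=%d\n", md_size_holds_dif(0));   /* ok=0 */
        printf("md_size=8   ok=%d\n", md_size_holds_dif(8));   /* ok=1 */
        printf("md_size=128 ok=%d\n", md_size_holds_dif(128)); /* ok=1 */
        return 0;
    }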
00:03:45.505 [2024-07-15 17:22:40.320012] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:45.505 [2024-07-15 17:22:40.320027] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:03:45.505 passed 00:03:45.505 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:03:45.505 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:03:45.505 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:45.505 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:03:45.505 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:03:45.505 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:03:45.505 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:45.505 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 17:22:40.327607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=f94c, Actual=fd4c 00:03:45.505 [2024-07-15 17:22:40.327923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fa21, Actual=fe21 00:03:45.505 [2024-07-15 17:22:40.328254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.328580] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.328917] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.505 [2024-07-15 17:22:40.329236] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.505 [2024-07-15 17:22:40.329553] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=648 00:03:45.505 [2024-07-15 17:22:40.329852] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fe21, Actual=31ef 00:03:45.505 [2024-07-15 17:22:40.330149] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1eb753ed, Actual=1ab753ed 00:03:45.505 [2024-07-15 17:22:40.330466] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=3c574660, Actual=38574660 00:03:45.505 [2024-07-15 17:22:40.330783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.331098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.331411] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.505 [2024-07-15 17:22:40.331728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.505 [2024-07-15 17:22:40.332043] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=c5e5b66 00:03:45.505 [2024-07-15 17:22:40.332348] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=38574660, Actual=6037886b 00:03:45.505 [2024-07-15 17:22:40.332645] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.505 [2024-07-15 17:22:40.332965] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:03:45.505 [2024-07-15 17:22:40.333277] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.333591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.333904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460 00:03:45.505 [2024-07-15 17:22:40.334217] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460 00:03:45.505 [2024-07-15 17:22:40.334530] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.505 [2024-07-15 17:22:40.334827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=88010a2d4837a266, Actual=eeb8f15464c1785a 00:03:45.505 passed 00:03:45.505 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-15 17:22:40.334987] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:45.505 [2024-07-15 17:22:40.335029] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:45.505 [2024-07-15 17:22:40.335071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.335112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.335153] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.505 [2024-07-15 17:22:40.335194] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: 
*ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.505 [2024-07-15 17:22:40.335235] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=648 00:03:45.505 [2024-07-15 17:22:40.335273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=31ef 00:03:45.505 [2024-07-15 17:22:40.335312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:03:45.505 [2024-07-15 17:22:40.335352] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:03:45.505 [2024-07-15 17:22:40.335393] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.335434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.335476] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.505 [2024-07-15 17:22:40.335517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.505 [2024-07-15 17:22:40.335557] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c5e5b66 00:03:45.505 [2024-07-15 17:22:40.335595] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6037886b 00:03:45.505 [2024-07-15 17:22:40.335633] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.505 [2024-07-15 17:22:40.335674] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:03:45.505 [2024-07-15 17:22:40.335714] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.335755] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.335795] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.505 [2024-07-15 17:22:40.335836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.505 passed 00:03:45.505 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-15 17:22:40.335877] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.505 [2024-07-15 17:22:40.335915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=eeb8f15464c1785a 00:03:45.505 [2024-07-15 17:22:40.335956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to 
compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:45.505 [2024-07-15 17:22:40.335996] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:45.505 [2024-07-15 17:22:40.336037] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.336078] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.336118] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.505 [2024-07-15 17:22:40.336159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.505 [2024-07-15 17:22:40.336207] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=648 00:03:45.505 [2024-07-15 17:22:40.336255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=31ef 00:03:45.505 [2024-07-15 17:22:40.336295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:03:45.505 [2024-07-15 17:22:40.336335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:03:45.505 [2024-07-15 17:22:40.336376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.336416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.505 [2024-07-15 17:22:40.336457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.505 [2024-07-15 17:22:40.336497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.505 [2024-07-15 17:22:40.336537] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c5e5b66 00:03:45.505 [2024-07-15 17:22:40.336575] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6037886b 00:03:45.505 [2024-07-15 17:22:40.336613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.505 [2024-07-15 17:22:40.336654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:03:45.506 [2024-07-15 17:22:40.336694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.336735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.336775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: 
*ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.506 [2024-07-15 17:22:40.336816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.506 passed 00:03:45.506 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-15 17:22:40.336856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.506 [2024-07-15 17:22:40.336894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=eeb8f15464c1785a 00:03:45.506 [2024-07-15 17:22:40.336935] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:45.506 [2024-07-15 17:22:40.336976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:45.506 [2024-07-15 17:22:40.337016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.337057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.337097] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.337138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.337178] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=648 00:03:45.506 [2024-07-15 17:22:40.337216] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=31ef 00:03:45.506 [2024-07-15 17:22:40.337254] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:03:45.506 [2024-07-15 17:22:40.337295] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:03:45.506 [2024-07-15 17:22:40.337335] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.337376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.337416] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.337457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.337497] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c5e5b66 00:03:45.506 [2024-07-15 17:22:40.337535] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, 
Actual=6037886b 00:03:45.506 [2024-07-15 17:22:40.337573] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.506 [2024-07-15 17:22:40.337613] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:03:45.506 [2024-07-15 17:22:40.337654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.337694] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.337735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.506 [2024-07-15 17:22:40.337775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.506 passed 00:03:45.506 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-15 17:22:40.337816] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.506 [2024-07-15 17:22:40.337854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=eeb8f15464c1785a 00:03:45.506 [2024-07-15 17:22:40.337894] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:45.506 [2024-07-15 17:22:40.337934] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:45.506 [2024-07-15 17:22:40.337975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.338015] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.338056] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.338096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.338136] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=648 00:03:45.506 passed 00:03:45.506 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-15 17:22:40.338174] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=31ef 00:03:45.506 [2024-07-15 17:22:40.338215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:03:45.506 [2024-07-15 17:22:40.338255] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:03:45.506 [2024-07-15 17:22:40.338296] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.338336] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.338376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.338417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.338457] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c5e5b66 00:03:45.506 [2024-07-15 17:22:40.338495] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6037886b 00:03:45.506 [2024-07-15 17:22:40.338533] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.506 [2024-07-15 17:22:40.338574] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:03:45.506 [2024-07-15 17:22:40.338614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.338654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.338695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.506 [2024-07-15 17:22:40.338736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.506 [2024-07-15 17:22:40.338783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.506 [2024-07-15 17:22:40.338822] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=eeb8f15464c1785a 00:03:45.506 passed 00:03:45.506 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-15 17:22:40.338863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:45.506 [2024-07-15 17:22:40.338904] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fa21, Actual=fe21 00:03:45.506 [2024-07-15 17:22:40.338944] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.338984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.339025] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.339066] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: 
*ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.339107] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=648 00:03:45.506 [2024-07-15 17:22:40.339144] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=31ef 00:03:45.506 passed 00:03:45.506 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-15 17:22:40.339184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:03:45.506 [2024-07-15 17:22:40.339225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=3c574660, Actual=38574660 00:03:45.506 [2024-07-15 17:22:40.339266] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.339306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.339347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.339387] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.506 [2024-07-15 17:22:40.339428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c5e5b66 00:03:45.506 [2024-07-15 17:22:40.339465] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=6037886b 00:03:45.506 [2024-07-15 17:22:40.339504] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.506 [2024-07-15 17:22:40.339544] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4c37a266, Actual=88010a2d4837a266 00:03:45.506 [2024-07-15 17:22:40.339584] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.339625] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.506 [2024-07-15 17:22:40.339666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.506 [2024-07-15 17:22:40.339706] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.506 [2024-07-15 17:22:40.339747] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.506 passed 00:03:45.506 Test: dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:03:45.506 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...[2024-07-15 17:22:40.339785] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, 
Actual=eeb8f15464c1785a 00:03:45.506 passed 00:03:45.506 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:45.506 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:45.507 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:45.507 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:45.507 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:45.507 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:45.507 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:45.507 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 17:22:40.345385] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=f94c, Actual=fd4c 00:03:45.507 [2024-07-15 17:22:40.345566] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4d2, Actual=d2 00:03:45.507 [2024-07-15 17:22:40.345744] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.345924] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.346099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.507 [2024-07-15 17:22:40.346275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.507 [2024-07-15 17:22:40.346450] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=648 00:03:45.507 [2024-07-15 17:22:40.346626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e4c1, Actual=2b0f 00:03:45.507 [2024-07-15 17:22:40.346806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1eb753ed, Actual=1ab753ed 00:03:45.507 [2024-07-15 17:22:40.346984] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=db72f83e, Actual=df72f83e 00:03:45.507 [2024-07-15 17:22:40.347162] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.347340] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.347517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.507 [2024-07-15 17:22:40.347695] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.507 [2024-07-15 17:22:40.347873] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=c5e5b66 00:03:45.507 [2024-07-15 17:22:40.348046] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=741688fe, Actual=2c7646f5 00:03:45.507 [2024-07-15 
17:22:40.348233] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.507 [2024-07-15 17:22:40.348415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=99afa92d6168123d, Actual=99afa92d6568123d 00:03:45.507 [2024-07-15 17:22:40.348593] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.348778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.348956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460 00:03:45.507 [2024-07-15 17:22:40.349134] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460 00:03:45.507 [2024-07-15 17:22:40.349312] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.507 [2024-07-15 17:22:40.349490] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4c87c8f68d0ed55f, Actual=2a3e338fa1f80f63 00:03:45.507 passed 00:03:45.507 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 17:22:40.349543] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:45.507 [2024-07-15 17:22:40.349587] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:03:45.507 [2024-07-15 17:22:40.349630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.349673] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.349716] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.507 [2024-07-15 17:22:40.349759] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.507 [2024-07-15 17:22:40.349802] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=648 00:03:45.507 [2024-07-15 17:22:40.349845] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=e2d8 00:03:45.507 [2024-07-15 17:22:40.349889] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:03:45.507 [2024-07-15 17:22:40.349932] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=45c2306c, Actual=41c2306c 00:03:45.507 [2024-07-15 17:22:40.349975] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.350017] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.350060] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.507 [2024-07-15 17:22:40.350103] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.507 [2024-07-15 17:22:40.350146] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c5e5b66 00:03:45.507 [2024-07-15 17:22:40.350188] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=b2c68ea7 00:03:45.507 [2024-07-15 17:22:40.350231] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.507 [2024-07-15 17:22:40.350275] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed4e4bc0babcf7c0, Actual=ed4e4bc0bebcf7c0 00:03:45.507 [2024-07-15 17:22:40.350319] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.350361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.350405] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.507 passed 00:03:45.507 Test: dix_sec_512_md_0_error ...passed 00:03:45.507 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-15 17:22:40.350448] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.507 [2024-07-15 17:22:40.350491] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.507 [2024-07-15 17:22:40.350534] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=5edfd1627a2cea9e 00:03:45.507 [2024-07-15 17:22:40.350545] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
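The long runs of "Failed to compare Guard / App Tag / Ref Tag" records above are expected negative-path output: the inject_1_2_4_8 cases corrupt one field of the protection tuple and then check that verification reports exactly that mismatch. The Guard field is a CRC16 computed with the T10-DIF polynomial 0x8BB7. A small self-contained sketch of that checksum and of how a corrupted block surfaces as a guard mismatch; crc16_t10dif below is a local bitwise helper for illustration, not the library's implementation:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* CRC16 T10-DIF: polynomial 0x8BB7, init 0, MSB-first, no final XOR. */
    static uint16_t crc16_t10dif(uint16_t crc, const uint8_t *buf, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            crc ^= (uint16_t)((uint16_t)buf[i] << 8);
            for (int b = 0; b < 8; b++) {
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x8BB7)
                                     : (uint16_t)(crc << 1);
            }
        }
        return crc;
    }

    int main(void)
    {
        uint8_t block[512];
        memset(block, 0xA5, sizeof(block));

        uint16_t expected = crc16_t10dif(0, block, sizeof(block));

        block[100] ^= 0x01; /* inject a single-bit error into the data */
        uint16_t actual = crc16_t10dif(0, block, sizeof(block));

        if (actual != expected) {
            /* This is the situation the log reports as
             * "Failed to compare Guard: ... Expected=..., Actual=..." */
            printf("guard mismatch: expected=%04x actual=%04x\n",
                   (unsigned)expected, (unsigned)actual);
        }
        return 0;
    }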
00:03:45.507 passed 00:03:45.507 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:03:45.507 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:03:45.507 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:03:45.507 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:03:45.507 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:03:45.507 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:03:45.507 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:03:45.507 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:03:45.507 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-15 17:22:40.355931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=f94c, Actual=fd4c 00:03:45.507 [2024-07-15 17:22:40.356110] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4d2, Actual=d2 00:03:45.507 [2024-07-15 17:22:40.356299] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.356481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.356659] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.507 [2024-07-15 17:22:40.356833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.507 [2024-07-15 17:22:40.357007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=fd4c, Actual=648 00:03:45.507 [2024-07-15 17:22:40.357184] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=e4c1, Actual=2b0f 00:03:45.507 [2024-07-15 17:22:40.357357] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1eb753ed, Actual=1ab753ed 00:03:45.507 [2024-07-15 17:22:40.357525] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=db72f83e, Actual=df72f83e 00:03:45.507 [2024-07-15 17:22:40.357699] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.357871] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.358044] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.507 [2024-07-15 17:22:40.358219] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=4000060 00:03:45.507 [2024-07-15 17:22:40.358391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=1ab753ed, Actual=c5e5b66 00:03:45.507 [2024-07-15 17:22:40.358564] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=741688fe, Actual=2c7646f5 00:03:45.507 [2024-07-15 
17:22:40.358740] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.507 [2024-07-15 17:22:40.358920] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=99afa92d6168123d, Actual=99afa92d6568123d 00:03:45.507 [2024-07-15 17:22:40.359094] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.359267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=96, Expected=88, Actual=488 00:03:45.507 [2024-07-15 17:22:40.359441] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460 00:03:45.507 [2024-07-15 17:22:40.359614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=96, Expected=60, Actual=460 00:03:45.507 [2024-07-15 17:22:40.359787] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.507 [2024-07-15 17:22:40.359967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=96, Expected=4c87c8f68d0ed55f, Actual=2a3e338fa1f80f63 00:03:45.507 passed 00:03:45.508 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-15 17:22:40.360019] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=f94c, Actual=fd4c 00:03:45.508 [2024-07-15 17:22:40.360062] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=cd05, Actual=c905 00:03:45.508 [2024-07-15 17:22:40.360105] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.508 [2024-07-15 17:22:40.360148] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.508 [2024-07-15 17:22:40.360190] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.508 [2024-07-15 17:22:40.360241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.508 [2024-07-15 17:22:40.360285] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=648 00:03:45.508 [2024-07-15 17:22:40.360328] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=e2d8 00:03:45.508 [2024-07-15 17:22:40.360370] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1eb753ed, Actual=1ab753ed 00:03:45.508 [2024-07-15 17:22:40.360413] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=45c2306c, Actual=41c2306c 00:03:45.508 [2024-07-15 17:22:40.360455] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.508 [2024-07-15 17:22:40.360497] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.508 [2024-07-15 17:22:40.360540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.508 [2024-07-15 17:22:40.360582] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=4000058 00:03:45.508 [2024-07-15 17:22:40.360624] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=c5e5b66 00:03:45.508 [2024-07-15 17:22:40.360666] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=b2c68ea7 00:03:45.508 [2024-07-15 17:22:40.360709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728acc20d3, Actual=a576a7728ecc20d3 00:03:45.508 [2024-07-15 17:22:40.360752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed4e4bc0babcf7c0, Actual=ed4e4bc0bebcf7c0 00:03:45.508 [2024-07-15 17:22:40.360794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.508 [2024-07-15 17:22:40.360837] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=488 00:03:45.508 [2024-07-15 17:22:40.360880] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.508 [2024-07-15 17:22:40.360923] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=458 00:03:45.508 passed 00:03:45.508 Test: set_md_interleave_iovs_test ...[2024-07-15 17:22:40.360966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=ca7ef06b19f5984 00:03:45.508 [2024-07-15 17:22:40.361008] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=5edfd1627a2cea9e 00:03:45.508 passed 00:03:45.508 Test: set_md_interleave_iovs_split_test ...passed 00:03:45.508 Test: dif_generate_stream_pi_16_test ...passed 00:03:45.508 Test: dif_generate_stream_test ...passed 00:03:45.508 Test: set_md_interleave_iovs_alignment_test ...[2024-07-15 17:22:40.361856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
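The single "Buffer overflow will occur" record above (dif.c:1822, spdk_dif_set_md_interleave_iovs) is likewise an intentionally provoked error from set_md_interleave_iovs_alignment_test: when data and metadata are interleaved, the destination iovecs must have room for num_blocks * (data_block_size + md_size) bytes, and the test hands in less than that. A rough arithmetic sketch of the sizing rule being exercised; interleaved_size is a hypothetical helper, not an SPDK function:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: bytes needed to interleave md after each data block. */
    static uint64_t interleaved_size(uint32_t data_block_size, uint32_t md_size,
                                     uint32_t num_blocks)
    {
        return (uint64_t)(data_block_size + md_size) * num_blocks;
    }

    int main(void)
    {
        uint32_t data_block = 512, md = 8, blocks = 8;
        uint64_t need = interleaved_size(data_block, md, blocks); /* 4160 bytes */
        uint64_t have = 4096;                                     /* e.g. one page */

        bool overflow = have < need; /* the condition the log flags as an error */
        printf("need=%llu have=%llu overflow=%d\n",
               (unsigned long long)need, (unsigned long long)have, overflow);
        return 0;
    }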
00:03:45.508 passed 00:03:45.508 Test: dif_generate_split_test ...passed 00:03:45.508 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:03:45.508 Test: dif_verify_split_test ...passed 00:03:45.508 Test: dif_verify_stream_multi_segments_test ...passed 00:03:45.508 Test: update_crc32c_pi_16_test ...passed 00:03:45.508 Test: update_crc32c_test ...passed 00:03:45.508 Test: dif_update_crc32c_split_test ...passed 00:03:45.508 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:03:45.508 Test: get_range_with_md_test ...passed 00:03:45.508 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:03:45.508 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:03:45.508 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:45.508 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:03:45.508 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:03:45.508 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:03:45.508 Test: dif_generate_and_verify_unmap_test ...passed 00:03:45.508 00:03:45.508 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.508 suites 1 1 n/a 0 0 00:03:45.508 tests 79 79 79 0 0 00:03:45.508 asserts 3584 3584 3584 0 n/a 00:03:45.508 00:03:45.508 Elapsed time = 0.047 seconds 00:03:45.508 17:22:40 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:03:45.508 00:03:45.508 00:03:45.508 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.508 http://cunit.sourceforge.net/ 00:03:45.508 00:03:45.508 00:03:45.508 Suite: iov 00:03:45.508 Test: test_single_iov ...passed 00:03:45.508 Test: test_simple_iov ...passed 00:03:45.508 Test: test_complex_iov ...passed 00:03:45.508 Test: test_iovs_to_buf ...passed 00:03:45.508 Test: test_buf_to_iovs ...passed 00:03:45.508 Test: test_memset ...passed 00:03:45.508 Test: test_iov_one ...passed 00:03:45.508 Test: test_iov_xfer ...passed 00:03:45.508 00:03:45.508 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.508 suites 1 1 n/a 0 0 00:03:45.508 tests 8 8 8 0 0 00:03:45.508 asserts 156 156 156 0 n/a 00:03:45.508 00:03:45.508 Elapsed time = 0.000 seconds 00:03:45.508 17:22:40 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:03:45.508 00:03:45.508 00:03:45.508 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.508 http://cunit.sourceforge.net/ 00:03:45.508 00:03:45.508 00:03:45.508 Suite: math 00:03:45.508 Test: test_serial_number_arithmetic ...passed 00:03:45.508 Suite: erase 00:03:45.508 Test: test_memset_s ...passed 00:03:45.508 00:03:45.508 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.508 suites 2 2 n/a 0 0 00:03:45.508 tests 2 2 2 0 0 00:03:45.508 asserts 18 18 18 0 n/a 00:03:45.508 00:03:45.508 Elapsed time = 0.000 seconds 00:03:45.508 17:22:40 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:03:45.508 00:03:45.508 00:03:45.508 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.508 http://cunit.sourceforge.net/ 00:03:45.508 00:03:45.508 00:03:45.508 Suite: pipe 00:03:45.508 Test: test_create_destroy ...passed 00:03:45.508 Test: test_write_get_buffer ...passed 00:03:45.508 Test: test_write_advance ...passed 00:03:45.508 Test: test_read_get_buffer ...passed 00:03:45.508 Test: test_read_advance ...passed 00:03:45.508 Test: 
test_data ...passed 00:03:45.508 00:03:45.508 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.508 suites 1 1 n/a 0 0 00:03:45.508 tests 6 6 6 0 0 00:03:45.508 asserts 251 251 251 0 n/a 00:03:45.508 00:03:45.508 Elapsed time = 0.000 seconds 00:03:45.508 17:22:40 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:03:45.508 00:03:45.508 00:03:45.508 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.508 http://cunit.sourceforge.net/ 00:03:45.508 00:03:45.508 00:03:45.508 Suite: xor 00:03:45.508 Test: test_xor_gen ...passed 00:03:45.508 00:03:45.508 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.508 suites 1 1 n/a 0 0 00:03:45.508 tests 1 1 1 0 0 00:03:45.508 asserts 17 17 17 0 n/a 00:03:45.508 00:03:45.508 Elapsed time = 0.000 seconds 00:03:45.508 00:03:45.508 real 0m0.129s 00:03:45.508 user 0m0.081s 00:03:45.508 sys 0m0.047s 00:03:45.508 17:22:40 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.508 17:22:40 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:03:45.508 ************************************ 00:03:45.508 END TEST unittest_util 00:03:45.508 ************************************ 00:03:45.508 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.508 17:22:40 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:45.509 17:22:40 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 START TEST unittest_dma 00:03:45.509 ************************************ 00:03:45.509 17:22:40 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:03:45.509 00:03:45.509 00:03:45.509 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.509 http://cunit.sourceforge.net/ 00:03:45.509 00:03:45.509 00:03:45.509 Suite: dma_suite 00:03:45.509 Test: test_dma ...[2024-07-15 17:22:40.442388] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:03:45.509 passed 00:03:45.509 00:03:45.509 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.509 suites 1 1 n/a 0 0 00:03:45.509 tests 1 1 1 0 0 00:03:45.509 asserts 54 54 54 0 n/a 00:03:45.509 00:03:45.509 Elapsed time = 0.000 seconds 00:03:45.509 00:03:45.509 real 0m0.005s 00:03:45.509 user 0m0.005s 00:03:45.509 sys 0m0.004s 00:03:45.509 17:22:40 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.509 17:22:40 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 END TEST unittest_dma 00:03:45.509 ************************************ 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.509 17:22:40 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.509 17:22:40 
unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 START TEST unittest_init 00:03:45.509 ************************************ 00:03:45.509 17:22:40 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:03:45.509 17:22:40 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:03:45.509 00:03:45.509 00:03:45.509 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.509 http://cunit.sourceforge.net/ 00:03:45.509 00:03:45.509 00:03:45.509 Suite: subsystem_suite 00:03:45.509 Test: subsystem_sort_test_depends_on_single ...passed 00:03:45.509 Test: subsystem_sort_test_depends_on_multiple ...passed 00:03:45.509 Test: subsystem_sort_test_missing_dependency ...[2024-07-15 17:22:40.483476] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:03:45.509 [2024-07-15 17:22:40.483901] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:03:45.509 passed 00:03:45.509 00:03:45.509 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.509 suites 1 1 n/a 0 0 00:03:45.509 tests 3 3 3 0 0 00:03:45.509 asserts 20 20 20 0 n/a 00:03:45.509 00:03:45.509 Elapsed time = 0.000 seconds 00:03:45.509 00:03:45.509 real 0m0.005s 00:03:45.509 user 0m0.004s 00:03:45.509 sys 0m0.004s 00:03:45.509 17:22:40 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.509 ************************************ 00:03:45.509 END TEST unittest_init 00:03:45.509 ************************************ 00:03:45.509 17:22:40 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.509 17:22:40 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 START TEST unittest_keyring 00:03:45.509 ************************************ 00:03:45.509 17:22:40 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:03:45.509 00:03:45.509 00:03:45.509 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.509 http://cunit.sourceforge.net/ 00:03:45.509 00:03:45.509 00:03:45.509 Suite: keyring 00:03:45.509 Test: test_keyring_add_remove ...[2024-07-15 17:22:40.526833] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:03:45.509 passed 00:03:45.509 Test: test_keyring_get_put ...passed 00:03:45.509 00:03:45.509 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.509 suites 1 1 n/a 0 0 00:03:45.509 tests 2 2 2 0 0 00:03:45.509 asserts 44 44 44 0 n/a 00:03:45.509 00:03:45.509 Elapsed time = 0.000 seconds 00:03:45.509 [2024-07-15 17:22:40.527077] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:03:45.509 [2024-07-15 17:22:40.527102] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to 
add key 'key0' to the keyring 00:03:45.509 00:03:45.509 real 0m0.006s 00:03:45.509 user 0m0.000s 00:03:45.509 sys 0m0.008s 00:03:45.509 17:22:40 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.509 17:22:40 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 END TEST unittest_keyring 00:03:45.509 ************************************ 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:45.509 00:03:45.509 00:03:45.509 17:22:40 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:03:45.509 17:22:40 unittest -- unit/unittest.sh@305 -- # set +x 00:03:45.509 ===================== 00:03:45.509 All unit tests passed 00:03:45.509 ===================== 00:03:45.509 WARN: lcov not installed or SPDK built without coverage! 00:03:45.509 WARN: neither valgrind nor ASAN is enabled! 00:03:45.509 00:03:45.509 00:03:45.509 00:03:45.509 real 0m31.128s 00:03:45.509 user 0m13.159s 00:03:45.509 sys 0m1.410s 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.509 17:22:40 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 END TEST unittest 00:03:45.509 ************************************ 00:03:45.509 17:22:40 -- common/autotest_common.sh@1142 -- # return 0 00:03:45.509 17:22:40 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:45.509 17:22:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:45.509 17:22:40 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:45.509 17:22:40 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:45.509 17:22:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.509 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 17:22:40 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:03:45.509 17:22:40 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:45.509 17:22:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.509 17:22:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.509 17:22:40 -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 START TEST env 00:03:45.509 ************************************ 00:03:45.509 17:22:40 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:03:45.509 * Looking for test storage... 
00:03:45.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:03:45.509 17:22:40 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:45.509 17:22:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.509 17:22:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.509 17:22:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 START TEST env_memory 00:03:45.509 ************************************ 00:03:45.509 17:22:40 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:03:45.509 00:03:45.509 00:03:45.509 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.509 http://cunit.sourceforge.net/ 00:03:45.509 00:03:45.509 00:03:45.509 Suite: memory 00:03:45.509 Test: alloc and free memory map ...[2024-07-15 17:22:40.768913] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:45.509 passed 00:03:45.509 Test: mem map translation ...[2024-07-15 17:22:40.776340] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:45.509 [2024-07-15 17:22:40.776374] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:45.509 [2024-07-15 17:22:40.776391] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:45.509 [2024-07-15 17:22:40.776401] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:45.509 passed 00:03:45.509 Test: mem map registration ...[2024-07-15 17:22:40.784947] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:45.509 [2024-07-15 17:22:40.784971] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:45.509 passed 00:03:45.509 Test: mem map adjacent registrations ...passed 00:03:45.509 00:03:45.509 Run Summary: Type Total Ran Passed Failed Inactive 00:03:45.509 suites 1 1 n/a 0 0 00:03:45.509 tests 4 4 4 0 0 00:03:45.509 asserts 152 152 152 0 n/a 00:03:45.509 00:03:45.509 Elapsed time = 0.039 seconds 00:03:45.509 00:03:45.509 real 0m0.042s 00:03:45.509 user 0m0.025s 00:03:45.509 sys 0m0.017s 00:03:45.509 17:22:40 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:45.509 17:22:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:45.509 ************************************ 00:03:45.509 END TEST env_memory 00:03:45.509 ************************************ 00:03:45.509 17:22:40 env -- common/autotest_common.sh@1142 -- # return 0 00:03:45.510 17:22:40 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:45.510 17:22:40 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:45.510 17:22:40 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.510 17:22:40 env -- common/autotest_common.sh@10 -- # set +x 00:03:45.510 ************************************ 00:03:45.510 START TEST env_vtophys 
00:03:45.510 ************************************ 00:03:45.510 17:22:40 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:03:45.510 EAL: lib.eal log level changed from notice to debug 00:03:45.510 EAL: Sysctl reports 10 cpus 00:03:45.510 EAL: Detected lcore 0 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 1 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 2 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 3 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 4 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 5 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 6 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 7 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 8 as core 0 on socket 0 00:03:45.510 EAL: Detected lcore 9 as core 0 on socket 0 00:03:45.510 EAL: Maximum logical cores by configuration: 128 00:03:45.510 EAL: Detected CPU lcores: 10 00:03:45.510 EAL: Detected NUMA nodes: 1 00:03:45.510 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:03:45.510 EAL: Checking presence of .so 'librte_eal.so.24' 00:03:45.510 EAL: Checking presence of .so 'librte_eal.so' 00:03:45.510 EAL: Detected static linkage of DPDK 00:03:45.510 EAL: No shared files mode enabled, IPC will be disabled 00:03:45.510 EAL: PCI scan found 10 devices 00:03:45.510 EAL: Specific IOVA mode is not requested, autodetecting 00:03:45.510 EAL: Selecting IOVA mode according to bus requests 00:03:45.510 EAL: Bus pci wants IOVA as 'PA' 00:03:45.510 EAL: Selected IOVA mode 'PA' 00:03:45.510 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:45.510 EAL: Ask a virtual area of 0x2e000 bytes 00:03:45.510 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x100039d000) not respected! 00:03:45.510 EAL: This may cause issues with mapping memory into secondary processes 00:03:45.510 EAL: Virtual area found at 0x100039d000 (size = 0x2e000) 00:03:45.510 EAL: Setting up physically contiguous memory... 00:03:45.510 EAL: Ask a virtual area of 0x1000 bytes 00:03:45.510 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1001180000) not respected! 00:03:45.510 EAL: This may cause issues with mapping memory into secondary processes 00:03:45.510 EAL: Virtual area found at 0x1001180000 (size = 0x1000) 00:03:45.510 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:03:45.510 EAL: Ask a virtual area of 0xf0000000 bytes 00:03:45.510 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:03:45.510 EAL: This may cause issues with mapping memory into secondary processes 00:03:45.510 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:03:45.510 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:03:45.510 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x1f0000000, len 268435456 00:03:45.510 EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x210000000, len 268435456 00:03:45.510 EAL: Mapped memory segment 2 @ 0x1070000000: physaddr:0x220000000, len 268435456 00:03:45.510 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x230000000, len 268435456 00:03:45.510 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x240000000, len 268435456 00:03:45.510 EAL: Mapped memory segment 5 @ 0x10c0000000: physaddr:0x270000000, len 268435456 00:03:45.768 EAL: Mapped memory segment 6 @ 0x10b0000000: physaddr:0x280000000, len 268435456 00:03:45.768 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x290000000, len 268435456 00:03:45.768 EAL: No shared files mode enabled, IPC is disabled 00:03:45.768 EAL: Added 2048M to heap on socket 0 00:03:45.768 EAL: TSC is not safe to use in SMP mode 00:03:45.768 EAL: TSC is not invariant 00:03:45.768 EAL: TSC frequency is ~2199996 KHz 00:03:45.768 EAL: Main lcore 0 is ready (tid=263f1a612000;cpuset=[0]) 00:03:45.768 EAL: PCI scan found 10 devices 00:03:45.768 EAL: Registering mem event callbacks not supported 00:03:45.768 00:03:45.768 00:03:45.768 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.768 http://cunit.sourceforge.net/ 00:03:45.768 00:03:45.768 00:03:45.768 Suite: components_suite 00:03:45.768 Test: vtophys_malloc_test ...passed 00:03:46.026 Test: vtophys_spdk_malloc_test ...passed 00:03:46.026 00:03:46.026 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.026 suites 1 1 n/a 0 0 00:03:46.026 tests 2 2 2 0 0 00:03:46.026 asserts 539 539 539 0 n/a 00:03:46.026 00:03:46.026 Elapsed time = 0.375 seconds 00:03:46.026 00:03:46.026 real 0m0.966s 00:03:46.026 user 0m0.384s 00:03:46.026 sys 0m0.583s 00:03:46.026 ************************************ 00:03:46.026 END TEST env_vtophys 00:03:46.026 ************************************ 00:03:46.026 17:22:41 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.026 17:22:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:03:46.026 17:22:41 env -- common/autotest_common.sh@1142 -- # return 0 00:03:46.026 17:22:41 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:46.026 17:22:41 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.026 17:22:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.026 17:22:41 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.026 ************************************ 00:03:46.026 START TEST env_pci 00:03:46.026 ************************************ 00:03:46.026 17:22:41 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:03:46.026 00:03:46.026 00:03:46.026 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.026 http://cunit.sourceforge.net/ 00:03:46.026 00:03:46.026 00:03:46.026 Suite: pci 00:03:46.026 Test: pci_hook ...passed 00:03:46.026 00:03:46.026 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.026 suites 1 1 n/a 0 0 00:03:46.026 tests 1 1 1 0 0 00:03:46.026 asserts 25 25 25 0 n/a 00:03:46.026 00:03:46.026 Elapsed time = 0.000 seconds 00:03:46.026 EAL: Cannot find device (10000:00:01.0) 00:03:46.026 EAL: 
Failed to attach device on primary process 00:03:46.026 00:03:46.026 real 0m0.009s 00:03:46.026 user 0m0.008s 00:03:46.026 sys 0m0.005s 00:03:46.026 17:22:41 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.026 17:22:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:03:46.026 ************************************ 00:03:46.026 END TEST env_pci 00:03:46.026 ************************************ 00:03:46.284 17:22:41 env -- common/autotest_common.sh@1142 -- # return 0 00:03:46.284 17:22:41 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:46.284 17:22:41 env -- env/env.sh@15 -- # uname 00:03:46.284 17:22:41 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:03:46.284 17:22:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:46.284 17:22:41 env -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:03:46.284 17:22:41 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.284 17:22:41 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.284 ************************************ 00:03:46.284 START TEST env_dpdk_post_init 00:03:46.284 ************************************ 00:03:46.284 17:22:41 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:03:46.284 EAL: Sysctl reports 10 cpus 00:03:46.284 EAL: Detected CPU lcores: 10 00:03:46.284 EAL: Detected NUMA nodes: 1 00:03:46.284 EAL: Detected static linkage of DPDK 00:03:46.284 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.284 EAL: Selected IOVA mode 'PA' 00:03:46.284 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:03:46.284 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x1f0000000, len 268435456 00:03:46.284 EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x210000000, len 268435456 00:03:46.284 EAL: Mapped memory segment 2 @ 0x1070000000: physaddr:0x220000000, len 268435456 00:03:46.542 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x230000000, len 268435456 00:03:46.542 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x240000000, len 268435456 00:03:46.542 EAL: Mapped memory segment 5 @ 0x10c0000000: physaddr:0x270000000, len 268435456 00:03:46.542 EAL: Mapped memory segment 6 @ 0x10b0000000: physaddr:0x280000000, len 268435456 00:03:46.800 EAL: Mapped memory segment 7 @ 0x10d0000000: physaddr:0x290000000, len 268435456 00:03:46.800 EAL: TSC is not safe to use in SMP mode 00:03:46.800 EAL: TSC is not invariant 00:03:46.800 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.800 [2024-07-15 17:22:42.436872] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:03:46.800 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:03:46.800 Starting DPDK initialization... 00:03:46.800 Starting SPDK post initialization... 00:03:46.800 SPDK NVMe probe 00:03:46.800 Attaching to 0000:00:10.0 00:03:46.800 Attached to 0000:00:10.0 00:03:46.800 Cleaning up... 
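The Attaching/Attached/Cleaning up lines above come from the env_dpdk_post_init helper enumerating the emulated NVMe controller (1b36:0010) over the freshly initialized DPDK environment. Below is a minimal sketch of that probe-and-attach flow, assuming SPDK's public spdk_env_init()/spdk_nvme_probe() API and trimming error handling to the basics; the app name and print strings are placeholders, not the test's actual code:

#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;

/* Accept every controller the PCIe scan reports. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;
}

/* Called once the controller has finished initialization. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
    g_ctrlr = ctrlr;
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);       /* DPDK EAL setup, as in the log above */
    opts.name = "post_init_sketch";  /* hypothetical app name */
    if (spdk_env_init(&opts) < 0) {
        fprintf(stderr, "spdk_env_init failed\n");
        return 1;
    }

    /* A NULL transport ID scans the local PCIe bus, which is what produces
     * the "Probe PCI driver: spdk_nvme (1b36:0010)" line above. */
    if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
        fprintf(stderr, "spdk_nvme_probe failed\n");
        return 1;
    }

    printf("Cleaning up...\n");
    if (g_ctrlr != NULL) {
        spdk_nvme_detach(g_ctrlr);
    }
    return 0;
}

It has to be linked against SPDK's env and nvme libraries; the per-callback prints line up with the Attaching to/Attached to lines captured in the log.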
00:03:46.800 00:03:46.800 real 0m0.580s 00:03:46.800 user 0m0.004s 00:03:46.800 sys 0m0.580s 00:03:46.800 17:22:42 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.800 17:22:42 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:03:46.800 ************************************ 00:03:46.800 END TEST env_dpdk_post_init 00:03:46.800 ************************************ 00:03:46.800 17:22:42 env -- common/autotest_common.sh@1142 -- # return 0 00:03:46.800 17:22:42 env -- env/env.sh@26 -- # uname 00:03:46.800 17:22:42 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:03:46.800 00:03:46.800 real 0m1.909s 00:03:46.800 user 0m0.582s 00:03:46.800 sys 0m1.361s 00:03:46.800 17:22:42 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:46.800 17:22:42 env -- common/autotest_common.sh@10 -- # set +x 00:03:46.800 ************************************ 00:03:46.800 END TEST env 00:03:46.800 ************************************ 00:03:46.800 17:22:42 -- common/autotest_common.sh@1142 -- # return 0 00:03:46.800 17:22:42 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:46.800 17:22:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:46.800 17:22:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.800 17:22:42 -- common/autotest_common.sh@10 -- # set +x 00:03:46.800 ************************************ 00:03:46.800 START TEST rpc 00:03:46.800 ************************************ 00:03:46.800 17:22:42 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:03:47.058 * Looking for test storage... 00:03:47.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:47.058 17:22:42 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45483 00:03:47.058 17:22:42 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:47.058 17:22:42 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:03:47.058 17:22:42 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45483 00:03:47.058 17:22:42 rpc -- common/autotest_common.sh@829 -- # '[' -z 45483 ']' 00:03:47.058 17:22:42 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:47.058 17:22:42 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:47.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:47.058 17:22:42 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:47.058 17:22:42 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:47.058 17:22:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:47.058 [2024-07-15 17:22:42.720430] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:03:47.059 [2024-07-15 17:22:42.720621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:03:47.624 EAL: TSC is not safe to use in SMP mode 00:03:47.624 EAL: TSC is not invariant 00:03:47.624 [2024-07-15 17:22:43.284431] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:47.624 [2024-07-15 17:22:43.390706] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:03:47.624 [2024-07-15 17:22:43.393471] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:03:47.624 [2024-07-15 17:22:43.393510] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45483' to capture a snapshot of events at runtime. 00:03:47.624 [2024-07-15 17:22:43.393531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:48.191 17:22:43 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:48.191 17:22:43 rpc -- common/autotest_common.sh@862 -- # return 0 00:03:48.191 17:22:43 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:48.191 17:22:43 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:03:48.191 17:22:43 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:48.191 17:22:43 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:48.191 17:22:43 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.191 17:22:43 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.191 17:22:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 ************************************ 00:03:48.191 START TEST rpc_integrity 00:03:48.191 ************************************ 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.191 { 00:03:48.191 "name": "Malloc0", 00:03:48.191 "aliases": [ 00:03:48.191 "d8b69c54-42ce-11ef-96ac-773515fba644" 00:03:48.191 ], 00:03:48.191 "product_name": "Malloc disk", 00:03:48.191 "block_size": 512, 00:03:48.191 "num_blocks": 16384, 00:03:48.191 "uuid": "d8b69c54-42ce-11ef-96ac-773515fba644", 00:03:48.191 "assigned_rate_limits": { 00:03:48.191 "rw_ios_per_sec": 0, 00:03:48.191 "rw_mbytes_per_sec": 0, 00:03:48.191 "r_mbytes_per_sec": 0, 00:03:48.191 "w_mbytes_per_sec": 0 00:03:48.191 }, 00:03:48.191 "claimed": false, 00:03:48.191 
"zoned": false, 00:03:48.191 "supported_io_types": { 00:03:48.191 "read": true, 00:03:48.191 "write": true, 00:03:48.191 "unmap": true, 00:03:48.191 "flush": true, 00:03:48.191 "reset": true, 00:03:48.191 "nvme_admin": false, 00:03:48.191 "nvme_io": false, 00:03:48.191 "nvme_io_md": false, 00:03:48.191 "write_zeroes": true, 00:03:48.191 "zcopy": true, 00:03:48.191 "get_zone_info": false, 00:03:48.191 "zone_management": false, 00:03:48.191 "zone_append": false, 00:03:48.191 "compare": false, 00:03:48.191 "compare_and_write": false, 00:03:48.191 "abort": true, 00:03:48.191 "seek_hole": false, 00:03:48.191 "seek_data": false, 00:03:48.191 "copy": true, 00:03:48.191 "nvme_iov_md": false 00:03:48.191 }, 00:03:48.191 "memory_domains": [ 00:03:48.191 { 00:03:48.191 "dma_device_id": "system", 00:03:48.191 "dma_device_type": 1 00:03:48.191 }, 00:03:48.191 { 00:03:48.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.191 "dma_device_type": 2 00:03:48.191 } 00:03:48.191 ], 00:03:48.191 "driver_specific": {} 00:03:48.191 } 00:03:48.191 ]' 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 [2024-07-15 17:22:43.871684] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:48.191 [2024-07-15 17:22:43.871734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.191 [2024-07-15 17:22:43.872333] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x280177237a00 00:03:48.191 [2024-07-15 17:22:43.872362] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.191 [2024-07-15 17:22:43.873258] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.191 [2024-07-15 17:22:43.873286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.191 Passthru0 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.191 { 00:03:48.191 "name": "Malloc0", 00:03:48.191 "aliases": [ 00:03:48.191 "d8b69c54-42ce-11ef-96ac-773515fba644" 00:03:48.191 ], 00:03:48.191 "product_name": "Malloc disk", 00:03:48.191 "block_size": 512, 00:03:48.191 "num_blocks": 16384, 00:03:48.191 "uuid": "d8b69c54-42ce-11ef-96ac-773515fba644", 00:03:48.191 "assigned_rate_limits": { 00:03:48.191 "rw_ios_per_sec": 0, 00:03:48.191 "rw_mbytes_per_sec": 0, 00:03:48.191 "r_mbytes_per_sec": 0, 00:03:48.191 "w_mbytes_per_sec": 0 00:03:48.191 }, 00:03:48.191 "claimed": true, 00:03:48.191 "claim_type": "exclusive_write", 00:03:48.191 "zoned": false, 00:03:48.191 "supported_io_types": { 00:03:48.191 "read": true, 00:03:48.191 "write": true, 00:03:48.191 "unmap": true, 00:03:48.191 "flush": true, 00:03:48.191 "reset": true, 
00:03:48.191 "nvme_admin": false, 00:03:48.191 "nvme_io": false, 00:03:48.191 "nvme_io_md": false, 00:03:48.191 "write_zeroes": true, 00:03:48.191 "zcopy": true, 00:03:48.191 "get_zone_info": false, 00:03:48.191 "zone_management": false, 00:03:48.191 "zone_append": false, 00:03:48.191 "compare": false, 00:03:48.191 "compare_and_write": false, 00:03:48.191 "abort": true, 00:03:48.191 "seek_hole": false, 00:03:48.191 "seek_data": false, 00:03:48.191 "copy": true, 00:03:48.191 "nvme_iov_md": false 00:03:48.191 }, 00:03:48.191 "memory_domains": [ 00:03:48.191 { 00:03:48.191 "dma_device_id": "system", 00:03:48.191 "dma_device_type": 1 00:03:48.191 }, 00:03:48.191 { 00:03:48.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.191 "dma_device_type": 2 00:03:48.191 } 00:03:48.191 ], 00:03:48.191 "driver_specific": {} 00:03:48.191 }, 00:03:48.191 { 00:03:48.191 "name": "Passthru0", 00:03:48.191 "aliases": [ 00:03:48.191 "91ae3609-5e7d-ea52-89bb-a60b8c37ac43" 00:03:48.191 ], 00:03:48.191 "product_name": "passthru", 00:03:48.191 "block_size": 512, 00:03:48.191 "num_blocks": 16384, 00:03:48.191 "uuid": "91ae3609-5e7d-ea52-89bb-a60b8c37ac43", 00:03:48.191 "assigned_rate_limits": { 00:03:48.191 "rw_ios_per_sec": 0, 00:03:48.191 "rw_mbytes_per_sec": 0, 00:03:48.191 "r_mbytes_per_sec": 0, 00:03:48.191 "w_mbytes_per_sec": 0 00:03:48.191 }, 00:03:48.191 "claimed": false, 00:03:48.191 "zoned": false, 00:03:48.191 "supported_io_types": { 00:03:48.191 "read": true, 00:03:48.191 "write": true, 00:03:48.191 "unmap": true, 00:03:48.191 "flush": true, 00:03:48.191 "reset": true, 00:03:48.191 "nvme_admin": false, 00:03:48.191 "nvme_io": false, 00:03:48.191 "nvme_io_md": false, 00:03:48.191 "write_zeroes": true, 00:03:48.191 "zcopy": true, 00:03:48.191 "get_zone_info": false, 00:03:48.191 "zone_management": false, 00:03:48.191 "zone_append": false, 00:03:48.191 "compare": false, 00:03:48.191 "compare_and_write": false, 00:03:48.191 "abort": true, 00:03:48.191 "seek_hole": false, 00:03:48.191 "seek_data": false, 00:03:48.191 "copy": true, 00:03:48.191 "nvme_iov_md": false 00:03:48.191 }, 00:03:48.191 "memory_domains": [ 00:03:48.191 { 00:03:48.191 "dma_device_id": "system", 00:03:48.191 "dma_device_type": 1 00:03:48.191 }, 00:03:48.191 { 00:03:48.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.191 "dma_device_type": 2 00:03:48.191 } 00:03:48.191 ], 00:03:48.191 "driver_specific": { 00:03:48.191 "passthru": { 00:03:48.191 "name": "Passthru0", 00:03:48.191 "base_bdev_name": "Malloc0" 00:03:48.191 } 00:03:48.191 } 00:03:48.191 } 00:03:48.191 ]' 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.191 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.191 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.192 
17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.192 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.192 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.192 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.192 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.192 17:22:43 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.192 00:03:48.192 real 0m0.136s 00:03:48.192 user 0m0.030s 00:03:48.192 sys 0m0.045s 00:03:48.192 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.192 17:22:43 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.192 ************************************ 00:03:48.192 END TEST rpc_integrity 00:03:48.192 ************************************ 00:03:48.192 17:22:43 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:48.192 17:22:43 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:48.192 17:22:43 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.192 17:22:43 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.192 17:22:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.192 ************************************ 00:03:48.192 START TEST rpc_plugins 00:03:48.192 ************************************ 00:03:48.192 17:22:43 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:03:48.192 17:22:43 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:48.192 17:22:43 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.192 17:22:43 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.192 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.192 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:48.192 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:48.192 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.192 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.192 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.192 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:48.192 { 00:03:48.192 "name": "Malloc1", 00:03:48.192 "aliases": [ 00:03:48.192 "d8cfa182-42ce-11ef-96ac-773515fba644" 00:03:48.192 ], 00:03:48.192 "product_name": "Malloc disk", 00:03:48.192 "block_size": 4096, 00:03:48.192 "num_blocks": 256, 00:03:48.192 "uuid": "d8cfa182-42ce-11ef-96ac-773515fba644", 00:03:48.192 "assigned_rate_limits": { 00:03:48.192 "rw_ios_per_sec": 0, 00:03:48.192 "rw_mbytes_per_sec": 0, 00:03:48.192 "r_mbytes_per_sec": 0, 00:03:48.192 "w_mbytes_per_sec": 0 00:03:48.192 }, 00:03:48.192 "claimed": false, 00:03:48.192 "zoned": false, 00:03:48.192 "supported_io_types": { 00:03:48.192 "read": true, 00:03:48.192 "write": true, 00:03:48.192 "unmap": true, 00:03:48.192 "flush": true, 00:03:48.192 "reset": true, 00:03:48.192 "nvme_admin": false, 00:03:48.192 "nvme_io": false, 00:03:48.192 "nvme_io_md": false, 00:03:48.192 "write_zeroes": true, 00:03:48.192 "zcopy": true, 00:03:48.192 "get_zone_info": false, 00:03:48.192 "zone_management": false, 00:03:48.192 "zone_append": false, 00:03:48.192 "compare": false, 00:03:48.192 "compare_and_write": false, 00:03:48.192 "abort": true, 00:03:48.192 "seek_hole": false, 00:03:48.192 "seek_data": false, 00:03:48.192 "copy": 
true, 00:03:48.192 "nvme_iov_md": false 00:03:48.192 }, 00:03:48.192 "memory_domains": [ 00:03:48.192 { 00:03:48.192 "dma_device_id": "system", 00:03:48.192 "dma_device_type": 1 00:03:48.192 }, 00:03:48.192 { 00:03:48.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.192 "dma_device_type": 2 00:03:48.192 } 00:03:48.192 ], 00:03:48.192 "driver_specific": {} 00:03:48.192 } 00:03:48.192 ]' 00:03:48.451 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:48.451 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:48.451 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:48.451 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.451 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.451 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.451 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:48.451 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.451 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.451 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.451 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:48.451 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:48.451 17:22:44 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:48.451 00:03:48.451 real 0m0.071s 00:03:48.451 user 0m0.023s 00:03:48.451 sys 0m0.016s 00:03:48.451 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.451 17:22:44 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:48.451 ************************************ 00:03:48.451 END TEST rpc_plugins 00:03:48.451 ************************************ 00:03:48.451 17:22:44 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:48.451 17:22:44 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:48.451 17:22:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.451 17:22:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.451 17:22:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.451 ************************************ 00:03:48.451 START TEST rpc_trace_cmd_test 00:03:48.451 ************************************ 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:48.451 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45483", 00:03:48.451 "tpoint_group_mask": "0x8", 00:03:48.451 "iscsi_conn": { 00:03:48.451 "mask": "0x2", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "scsi": { 00:03:48.451 "mask": "0x4", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "bdev": { 00:03:48.451 "mask": "0x8", 00:03:48.451 "tpoint_mask": "0xffffffffffffffff" 00:03:48.451 }, 00:03:48.451 "nvmf_rdma": { 00:03:48.451 "mask": "0x10", 00:03:48.451 
"tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "nvmf_tcp": { 00:03:48.451 "mask": "0x20", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "blobfs": { 00:03:48.451 "mask": "0x80", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "dsa": { 00:03:48.451 "mask": "0x200", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "thread": { 00:03:48.451 "mask": "0x400", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "nvme_pcie": { 00:03:48.451 "mask": "0x800", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "iaa": { 00:03:48.451 "mask": "0x1000", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "nvme_tcp": { 00:03:48.451 "mask": "0x2000", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "bdev_nvme": { 00:03:48.451 "mask": "0x4000", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 }, 00:03:48.451 "sock": { 00:03:48.451 "mask": "0x8000", 00:03:48.451 "tpoint_mask": "0x0" 00:03:48.451 } 00:03:48.451 }' 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:48.451 00:03:48.451 real 0m0.053s 00:03:48.451 user 0m0.032s 00:03:48.451 sys 0m0.002s 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.451 17:22:44 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:48.451 ************************************ 00:03:48.451 END TEST rpc_trace_cmd_test 00:03:48.451 ************************************ 00:03:48.451 17:22:44 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:48.451 17:22:44 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:48.451 17:22:44 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:48.451 17:22:44 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:48.451 17:22:44 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.451 17:22:44 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.451 17:22:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.451 ************************************ 00:03:48.451 START TEST rpc_daemon_integrity 00:03:48.451 ************************************ 00:03:48.451 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:03:48.451 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:48.451 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.451 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.451 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.451 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:48.451 
17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:48.451 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:48.452 { 00:03:48.452 "name": "Malloc2", 00:03:48.452 "aliases": [ 00:03:48.452 "d8f1d079-42ce-11ef-96ac-773515fba644" 00:03:48.452 ], 00:03:48.452 "product_name": "Malloc disk", 00:03:48.452 "block_size": 512, 00:03:48.452 "num_blocks": 16384, 00:03:48.452 "uuid": "d8f1d079-42ce-11ef-96ac-773515fba644", 00:03:48.452 "assigned_rate_limits": { 00:03:48.452 "rw_ios_per_sec": 0, 00:03:48.452 "rw_mbytes_per_sec": 0, 00:03:48.452 "r_mbytes_per_sec": 0, 00:03:48.452 "w_mbytes_per_sec": 0 00:03:48.452 }, 00:03:48.452 "claimed": false, 00:03:48.452 "zoned": false, 00:03:48.452 "supported_io_types": { 00:03:48.452 "read": true, 00:03:48.452 "write": true, 00:03:48.452 "unmap": true, 00:03:48.452 "flush": true, 00:03:48.452 "reset": true, 00:03:48.452 "nvme_admin": false, 00:03:48.452 "nvme_io": false, 00:03:48.452 "nvme_io_md": false, 00:03:48.452 "write_zeroes": true, 00:03:48.452 "zcopy": true, 00:03:48.452 "get_zone_info": false, 00:03:48.452 "zone_management": false, 00:03:48.452 "zone_append": false, 00:03:48.452 "compare": false, 00:03:48.452 "compare_and_write": false, 00:03:48.452 "abort": true, 00:03:48.452 "seek_hole": false, 00:03:48.452 "seek_data": false, 00:03:48.452 "copy": true, 00:03:48.452 "nvme_iov_md": false 00:03:48.452 }, 00:03:48.452 "memory_domains": [ 00:03:48.452 { 00:03:48.452 "dma_device_id": "system", 00:03:48.452 "dma_device_type": 1 00:03:48.452 }, 00:03:48.452 { 00:03:48.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.452 "dma_device_type": 2 00:03:48.452 } 00:03:48.452 ], 00:03:48.452 "driver_specific": {} 00:03:48.452 } 00:03:48.452 ]' 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.452 [2024-07-15 17:22:44.255716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:48.452 [2024-07-15 17:22:44.255763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:48.452 [2024-07-15 17:22:44.255791] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x280177237a00 00:03:48.452 [2024-07-15 
17:22:44.255800] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:48.452 [2024-07-15 17:22:44.256484] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:48.452 [2024-07-15 17:22:44.256512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:48.452 Passthru0 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.452 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.710 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.710 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:48.710 { 00:03:48.710 "name": "Malloc2", 00:03:48.710 "aliases": [ 00:03:48.710 "d8f1d079-42ce-11ef-96ac-773515fba644" 00:03:48.710 ], 00:03:48.710 "product_name": "Malloc disk", 00:03:48.710 "block_size": 512, 00:03:48.710 "num_blocks": 16384, 00:03:48.710 "uuid": "d8f1d079-42ce-11ef-96ac-773515fba644", 00:03:48.710 "assigned_rate_limits": { 00:03:48.710 "rw_ios_per_sec": 0, 00:03:48.710 "rw_mbytes_per_sec": 0, 00:03:48.710 "r_mbytes_per_sec": 0, 00:03:48.710 "w_mbytes_per_sec": 0 00:03:48.710 }, 00:03:48.710 "claimed": true, 00:03:48.710 "claim_type": "exclusive_write", 00:03:48.710 "zoned": false, 00:03:48.710 "supported_io_types": { 00:03:48.710 "read": true, 00:03:48.710 "write": true, 00:03:48.710 "unmap": true, 00:03:48.710 "flush": true, 00:03:48.710 "reset": true, 00:03:48.710 "nvme_admin": false, 00:03:48.710 "nvme_io": false, 00:03:48.710 "nvme_io_md": false, 00:03:48.710 "write_zeroes": true, 00:03:48.710 "zcopy": true, 00:03:48.710 "get_zone_info": false, 00:03:48.710 "zone_management": false, 00:03:48.710 "zone_append": false, 00:03:48.710 "compare": false, 00:03:48.710 "compare_and_write": false, 00:03:48.710 "abort": true, 00:03:48.710 "seek_hole": false, 00:03:48.710 "seek_data": false, 00:03:48.710 "copy": true, 00:03:48.710 "nvme_iov_md": false 00:03:48.710 }, 00:03:48.710 "memory_domains": [ 00:03:48.710 { 00:03:48.710 "dma_device_id": "system", 00:03:48.710 "dma_device_type": 1 00:03:48.710 }, 00:03:48.710 { 00:03:48.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.710 "dma_device_type": 2 00:03:48.710 } 00:03:48.710 ], 00:03:48.710 "driver_specific": {} 00:03:48.710 }, 00:03:48.710 { 00:03:48.710 "name": "Passthru0", 00:03:48.710 "aliases": [ 00:03:48.710 "12bcde9f-956a-5652-b62f-0a3027b358e8" 00:03:48.710 ], 00:03:48.710 "product_name": "passthru", 00:03:48.710 "block_size": 512, 00:03:48.710 "num_blocks": 16384, 00:03:48.710 "uuid": "12bcde9f-956a-5652-b62f-0a3027b358e8", 00:03:48.710 "assigned_rate_limits": { 00:03:48.710 "rw_ios_per_sec": 0, 00:03:48.711 "rw_mbytes_per_sec": 0, 00:03:48.711 "r_mbytes_per_sec": 0, 00:03:48.711 "w_mbytes_per_sec": 0 00:03:48.711 }, 00:03:48.711 "claimed": false, 00:03:48.711 "zoned": false, 00:03:48.711 "supported_io_types": { 00:03:48.711 "read": true, 00:03:48.711 "write": true, 00:03:48.711 "unmap": true, 00:03:48.711 "flush": true, 00:03:48.711 "reset": true, 00:03:48.711 "nvme_admin": false, 00:03:48.711 "nvme_io": false, 00:03:48.711 "nvme_io_md": false, 00:03:48.711 "write_zeroes": true, 00:03:48.711 "zcopy": true, 00:03:48.711 "get_zone_info": false, 00:03:48.711 "zone_management": false, 00:03:48.711 "zone_append": 
false, 00:03:48.711 "compare": false, 00:03:48.711 "compare_and_write": false, 00:03:48.711 "abort": true, 00:03:48.711 "seek_hole": false, 00:03:48.711 "seek_data": false, 00:03:48.711 "copy": true, 00:03:48.711 "nvme_iov_md": false 00:03:48.711 }, 00:03:48.711 "memory_domains": [ 00:03:48.711 { 00:03:48.711 "dma_device_id": "system", 00:03:48.711 "dma_device_type": 1 00:03:48.711 }, 00:03:48.711 { 00:03:48.711 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:48.711 "dma_device_type": 2 00:03:48.711 } 00:03:48.711 ], 00:03:48.711 "driver_specific": { 00:03:48.711 "passthru": { 00:03:48.711 "name": "Passthru0", 00:03:48.711 "base_bdev_name": "Malloc2" 00:03:48.711 } 00:03:48.711 } 00:03:48.711 } 00:03:48.711 ]' 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:48.711 00:03:48.711 real 0m0.134s 00:03:48.711 user 0m0.049s 00:03:48.711 sys 0m0.027s 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.711 17:22:44 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:48.711 ************************************ 00:03:48.711 END TEST rpc_daemon_integrity 00:03:48.711 ************************************ 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:48.711 17:22:44 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:48.711 17:22:44 rpc -- rpc/rpc.sh@84 -- # killprocess 45483 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@948 -- # '[' -z 45483 ']' 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@952 -- # kill -0 45483 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@953 -- # uname 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45483 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@956 -- # tail -1 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:03:48.711 17:22:44 rpc -- 
common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:03:48.711 killing process with pid 45483 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45483' 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@967 -- # kill 45483 00:03:48.711 17:22:44 rpc -- common/autotest_common.sh@972 -- # wait 45483 00:03:48.969 00:03:48.970 real 0m2.094s 00:03:48.970 user 0m2.137s 00:03:48.970 sys 0m0.987s 00:03:48.970 17:22:44 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:48.970 17:22:44 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:48.970 ************************************ 00:03:48.970 END TEST rpc 00:03:48.970 ************************************ 00:03:48.970 17:22:44 -- common/autotest_common.sh@1142 -- # return 0 00:03:48.970 17:22:44 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:48.970 17:22:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:48.970 17:22:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:48.970 17:22:44 -- common/autotest_common.sh@10 -- # set +x 00:03:48.970 ************************************ 00:03:48.970 START TEST skip_rpc 00:03:48.970 ************************************ 00:03:48.970 17:22:44 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:03:49.228 * Looking for test storage... 00:03:49.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:03:49.228 17:22:44 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:49.228 17:22:44 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:03:49.228 17:22:44 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:49.228 17:22:44 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:49.228 17:22:44 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.228 17:22:44 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:49.228 ************************************ 00:03:49.228 START TEST skip_rpc 00:03:49.228 ************************************ 00:03:49.228 17:22:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:03:49.228 17:22:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=45659 00:03:49.228 17:22:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:49.228 17:22:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:49.228 17:22:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:03:49.228 [2024-07-15 17:22:44.844571] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:03:49.228 [2024-07-15 17:22:44.844824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:03:49.795 EAL: TSC is not safe to use in SMP mode 00:03:49.795 EAL: TSC is not invariant 00:03:49.795 [2024-07-15 17:22:45.417430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:49.795 [2024-07-15 17:22:45.507452] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
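What this skip_rpc stage boils down to: start the target with --no-rpc-server and confirm that an RPC call fails instead of hanging. A minimal hedged sketch outside the captured run, reusing the paths visible in the trace (the retry and cleanup details of the real test_skip_rpc helper are omitted):

# Hedged sketch, not part of the log above; paths mirror the trace but are otherwise assumptions.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5
# With no RPC server listening, spdk_get_version is expected to fail.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC succeeded with --no-rpc-server" >&2
fi
kill "$tgt_pid"
wait "$tgt_pid"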
00:03:49.795 [2024-07-15 17:22:45.509735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 45659 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 45659 ']' 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 45659 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45659 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # tail -1 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:03:55.080 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:03:55.081 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45659' 00:03:55.081 killing process with pid 45659 00:03:55.081 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 45659 00:03:55.081 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 45659 00:03:55.081 00:03:55.081 real 0m5.589s 00:03:55.081 user 0m4.990s 00:03:55.081 sys 0m0.616s 00:03:55.081 ************************************ 00:03:55.081 END TEST skip_rpc 00:03:55.081 ************************************ 00:03:55.081 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:55.081 17:22:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.081 17:22:50 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:03:55.081 17:22:50 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:55.081 17:22:50 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.081 17:22:50 skip_rpc -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.081 17:22:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.081 ************************************ 00:03:55.081 START TEST skip_rpc_with_json 00:03:55.081 ************************************ 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=45708 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 45708 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 45708 ']' 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:03:55.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:03:55.081 17:22:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.081 [2024-07-15 17:22:50.481046] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:03:55.081 [2024-07-15 17:22:50.481292] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:03:55.339 EAL: TSC is not safe to use in SMP mode 00:03:55.339 EAL: TSC is not invariant 00:03:55.339 [2024-07-15 17:22:51.018050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:55.339 [2024-07-15 17:22:51.108605] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
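The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." step above is a poll loop against the RPC socket. A hedged stand-alone equivalent; the retry count and interval here are illustrative, not the values the waitforlisten helper actually uses:

# Poll until the target answers on its RPC socket; assumes rpc.py at the in-tree path.
for _ in $(seq 1 100); do
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; then
        break
    fi
    sleep 1
done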
00:03:55.339 [2024-07-15 17:22:51.111011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.906 [2024-07-15 17:22:51.471141] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:55.906 request: 00:03:55.906 { 00:03:55.906 "trtype": "tcp", 00:03:55.906 "method": "nvmf_get_transports", 00:03:55.906 "req_id": 1 00:03:55.906 } 00:03:55.906 Got JSON-RPC error response 00:03:55.906 response: 00:03:55.906 { 00:03:55.906 "code": -19, 00:03:55.906 "message": "Operation not supported by device" 00:03:55.906 } 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.906 [2024-07-15 17:22:51.483160] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:03:55.906 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:03:55.906 { 00:03:55.906 "subsystems": [ 00:03:55.906 { 00:03:55.906 "subsystem": "vmd", 00:03:55.906 "config": [] 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "subsystem": "iobuf", 00:03:55.906 "config": [ 00:03:55.906 { 00:03:55.906 "method": "iobuf_set_options", 00:03:55.906 "params": { 00:03:55.906 "small_pool_count": 8192, 00:03:55.906 "large_pool_count": 1024, 00:03:55.906 "small_bufsize": 8192, 00:03:55.906 "large_bufsize": 135168 00:03:55.906 } 00:03:55.906 } 00:03:55.906 ] 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "subsystem": "scheduler", 00:03:55.906 "config": [ 00:03:55.906 { 00:03:55.906 "method": "framework_set_scheduler", 00:03:55.906 "params": { 00:03:55.906 "name": "static" 00:03:55.906 } 00:03:55.906 } 00:03:55.906 ] 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "subsystem": "sock", 00:03:55.906 "config": [ 00:03:55.906 { 00:03:55.906 "method": "sock_set_default_impl", 00:03:55.906 "params": { 00:03:55.906 "impl_name": "posix" 00:03:55.906 } 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "method": "sock_impl_set_options", 00:03:55.906 "params": { 00:03:55.906 "impl_name": "ssl", 00:03:55.906 "recv_buf_size": 4096, 00:03:55.906 "send_buf_size": 4096, 00:03:55.906 "enable_recv_pipe": true, 00:03:55.906 "enable_quickack": false, 00:03:55.906 "enable_placement_id": 0, 00:03:55.906 
"enable_zerocopy_send_server": true, 00:03:55.906 "enable_zerocopy_send_client": false, 00:03:55.906 "zerocopy_threshold": 0, 00:03:55.906 "tls_version": 0, 00:03:55.906 "enable_ktls": false 00:03:55.906 } 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "method": "sock_impl_set_options", 00:03:55.906 "params": { 00:03:55.906 "impl_name": "posix", 00:03:55.906 "recv_buf_size": 2097152, 00:03:55.906 "send_buf_size": 2097152, 00:03:55.906 "enable_recv_pipe": true, 00:03:55.906 "enable_quickack": false, 00:03:55.906 "enable_placement_id": 0, 00:03:55.906 "enable_zerocopy_send_server": true, 00:03:55.906 "enable_zerocopy_send_client": false, 00:03:55.906 "zerocopy_threshold": 0, 00:03:55.906 "tls_version": 0, 00:03:55.906 "enable_ktls": false 00:03:55.906 } 00:03:55.906 } 00:03:55.906 ] 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "subsystem": "keyring", 00:03:55.906 "config": [] 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "subsystem": "accel", 00:03:55.906 "config": [ 00:03:55.906 { 00:03:55.906 "method": "accel_set_options", 00:03:55.906 "params": { 00:03:55.906 "small_cache_size": 128, 00:03:55.906 "large_cache_size": 16, 00:03:55.906 "task_count": 2048, 00:03:55.906 "sequence_count": 2048, 00:03:55.906 "buf_count": 2048 00:03:55.906 } 00:03:55.906 } 00:03:55.906 ] 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "subsystem": "bdev", 00:03:55.906 "config": [ 00:03:55.906 { 00:03:55.906 "method": "bdev_set_options", 00:03:55.906 "params": { 00:03:55.906 "bdev_io_pool_size": 65535, 00:03:55.906 "bdev_io_cache_size": 256, 00:03:55.906 "bdev_auto_examine": true, 00:03:55.906 "iobuf_small_cache_size": 128, 00:03:55.906 "iobuf_large_cache_size": 16 00:03:55.906 } 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "method": "bdev_raid_set_options", 00:03:55.906 "params": { 00:03:55.906 "process_window_size_kb": 1024 00:03:55.906 } 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "method": "bdev_nvme_set_options", 00:03:55.906 "params": { 00:03:55.906 "action_on_timeout": "none", 00:03:55.906 "timeout_us": 0, 00:03:55.906 "timeout_admin_us": 0, 00:03:55.906 "keep_alive_timeout_ms": 10000, 00:03:55.906 "arbitration_burst": 0, 00:03:55.906 "low_priority_weight": 0, 00:03:55.906 "medium_priority_weight": 0, 00:03:55.906 "high_priority_weight": 0, 00:03:55.906 "nvme_adminq_poll_period_us": 10000, 00:03:55.906 "nvme_ioq_poll_period_us": 0, 00:03:55.906 "io_queue_requests": 0, 00:03:55.906 "delay_cmd_submit": true, 00:03:55.906 "transport_retry_count": 4, 00:03:55.906 "bdev_retry_count": 3, 00:03:55.906 "transport_ack_timeout": 0, 00:03:55.906 "ctrlr_loss_timeout_sec": 0, 00:03:55.906 "reconnect_delay_sec": 0, 00:03:55.906 "fast_io_fail_timeout_sec": 0, 00:03:55.906 "disable_auto_failback": false, 00:03:55.906 "generate_uuids": false, 00:03:55.906 "transport_tos": 0, 00:03:55.906 "nvme_error_stat": false, 00:03:55.906 "rdma_srq_size": 0, 00:03:55.906 "io_path_stat": false, 00:03:55.906 "allow_accel_sequence": false, 00:03:55.906 "rdma_max_cq_size": 0, 00:03:55.906 "rdma_cm_event_timeout_ms": 0, 00:03:55.906 "dhchap_digests": [ 00:03:55.906 "sha256", 00:03:55.906 "sha384", 00:03:55.906 "sha512" 00:03:55.906 ], 00:03:55.906 "dhchap_dhgroups": [ 00:03:55.906 "null", 00:03:55.906 "ffdhe2048", 00:03:55.906 "ffdhe3072", 00:03:55.906 "ffdhe4096", 00:03:55.906 "ffdhe6144", 00:03:55.906 "ffdhe8192" 00:03:55.906 ] 00:03:55.906 } 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "method": "bdev_nvme_set_hotplug", 00:03:55.906 "params": { 00:03:55.906 "period_us": 100000, 00:03:55.906 "enable": false 00:03:55.906 } 00:03:55.906 }, 00:03:55.906 
{ 00:03:55.906 "method": "bdev_wait_for_examine" 00:03:55.906 } 00:03:55.906 ] 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "subsystem": "scsi", 00:03:55.906 "config": null 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "subsystem": "nvmf", 00:03:55.906 "config": [ 00:03:55.906 { 00:03:55.906 "method": "nvmf_set_config", 00:03:55.906 "params": { 00:03:55.906 "discovery_filter": "match_any", 00:03:55.906 "admin_cmd_passthru": { 00:03:55.906 "identify_ctrlr": false 00:03:55.906 } 00:03:55.906 } 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "method": "nvmf_set_max_subsystems", 00:03:55.906 "params": { 00:03:55.906 "max_subsystems": 1024 00:03:55.906 } 00:03:55.906 }, 00:03:55.906 { 00:03:55.906 "method": "nvmf_set_crdt", 00:03:55.906 "params": { 00:03:55.907 "crdt1": 0, 00:03:55.907 "crdt2": 0, 00:03:55.907 "crdt3": 0 00:03:55.907 } 00:03:55.907 }, 00:03:55.907 { 00:03:55.907 "method": "nvmf_create_transport", 00:03:55.907 "params": { 00:03:55.907 "trtype": "TCP", 00:03:55.907 "max_queue_depth": 128, 00:03:55.907 "max_io_qpairs_per_ctrlr": 127, 00:03:55.907 "in_capsule_data_size": 4096, 00:03:55.907 "max_io_size": 131072, 00:03:55.907 "io_unit_size": 131072, 00:03:55.907 "max_aq_depth": 128, 00:03:55.907 "num_shared_buffers": 511, 00:03:55.907 "buf_cache_size": 4294967295, 00:03:55.907 "dif_insert_or_strip": false, 00:03:55.907 "zcopy": false, 00:03:55.907 "c2h_success": true, 00:03:55.907 "sock_priority": 0, 00:03:55.907 "abort_timeout_sec": 1, 00:03:55.907 "ack_timeout": 0, 00:03:55.907 "data_wr_pool_size": 0 00:03:55.907 } 00:03:55.907 } 00:03:55.907 ] 00:03:55.907 }, 00:03:55.907 { 00:03:55.907 "subsystem": "iscsi", 00:03:55.907 "config": [ 00:03:55.907 { 00:03:55.907 "method": "iscsi_set_options", 00:03:55.907 "params": { 00:03:55.907 "node_base": "iqn.2016-06.io.spdk", 00:03:55.907 "max_sessions": 128, 00:03:55.907 "max_connections_per_session": 2, 00:03:55.907 "max_queue_depth": 64, 00:03:55.907 "default_time2wait": 2, 00:03:55.907 "default_time2retain": 20, 00:03:55.907 "first_burst_length": 8192, 00:03:55.907 "immediate_data": true, 00:03:55.907 "allow_duplicated_isid": false, 00:03:55.907 "error_recovery_level": 0, 00:03:55.907 "nop_timeout": 60, 00:03:55.907 "nop_in_interval": 30, 00:03:55.907 "disable_chap": false, 00:03:55.907 "require_chap": false, 00:03:55.907 "mutual_chap": false, 00:03:55.907 "chap_group": 0, 00:03:55.907 "max_large_datain_per_connection": 64, 00:03:55.907 "max_r2t_per_connection": 4, 00:03:55.907 "pdu_pool_size": 36864, 00:03:55.907 "immediate_data_pool_size": 16384, 00:03:55.907 "data_out_pool_size": 2048 00:03:55.907 } 00:03:55.907 } 00:03:55.907 ] 00:03:55.907 } 00:03:55.907 ] 00:03:55.907 } 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 45708 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45708 ']' 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45708 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45708 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:03:55.907 killing process with pid 45708 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45708' 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45708 00:03:55.907 17:22:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45708 00:03:56.165 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=45722 00:03:56.165 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:56.165 17:22:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 45722 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45722 ']' 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45722 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45722 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:01.433 killing process with pid 45722 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45722' 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45722 00:04:01.433 17:22:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45722 00:04:01.433 17:22:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:01.433 17:22:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:01.433 00:04:01.433 real 0m6.783s 00:04:01.433 user 0m6.110s 00:04:01.433 sys 0m1.221s 00:04:01.433 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.433 17:22:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:01.433 ************************************ 00:04:01.433 END TEST skip_rpc_with_json 00:04:01.433 ************************************ 00:04:01.690 17:22:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.690 17:22:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:01.690 17:22:57 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.690 17:22:57 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.690 17:22:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.690 ************************************ 00:04:01.690 START TEST skip_rpc_with_delay 00:04:01.690 ************************************ 00:04:01.690 17:22:57 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:01.690 17:22:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.690 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:01.690 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.690 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:01.690 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:01.691 [2024-07-15 17:22:57.312429] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
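A hedged reproduction of the failure asserted by this skip_rpc_with_delay case, outside the captured run: combining --no-rpc-server with --wait-for-rpc should make spdk_tgt print the *ERROR* line above and exit non-zero.

# Expected to fail fast rather than start the app framework; non-zero exit is the pass condition.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc
echo "exit status: $?"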
00:04:01.691 [2024-07-15 17:22:57.312753] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:01.691 00:04:01.691 real 0m0.013s 00:04:01.691 user 0m0.004s 00:04:01.691 sys 0m0.001s 00:04:01.691 ************************************ 00:04:01.691 END TEST skip_rpc_with_delay 00:04:01.691 ************************************ 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.691 17:22:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:01.691 17:22:57 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:01.691 17:22:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:01.691 17:22:57 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:04:01.691 17:22:57 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:01.691 00:04:01.691 real 0m12.662s 00:04:01.691 user 0m11.241s 00:04:01.691 sys 0m2.021s 00:04:01.691 17:22:57 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.691 17:22:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:01.691 ************************************ 00:04:01.691 END TEST skip_rpc 00:04:01.691 ************************************ 00:04:01.691 17:22:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:01.691 17:22:57 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:01.691 17:22:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.691 17:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.691 17:22:57 -- common/autotest_common.sh@10 -- # set +x 00:04:01.691 ************************************ 00:04:01.691 START TEST rpc_client 00:04:01.691 ************************************ 00:04:01.691 17:22:57 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:01.948 * Looking for test storage... 
00:04:01.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:01.948 17:22:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:01.948 OK 00:04:01.948 17:22:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:01.948 00:04:01.948 real 0m0.160s 00:04:01.948 user 0m0.090s 00:04:01.948 sys 0m0.141s 00:04:01.948 17:22:57 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:01.948 17:22:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:01.948 ************************************ 00:04:01.948 END TEST rpc_client 00:04:01.948 ************************************ 00:04:01.948 17:22:57 -- common/autotest_common.sh@1142 -- # return 0 00:04:01.948 17:22:57 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:01.948 17:22:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:01.948 17:22:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:01.948 17:22:57 -- common/autotest_common.sh@10 -- # set +x 00:04:01.948 ************************************ 00:04:01.948 START TEST json_config 00:04:01.948 ************************************ 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:01.948 17:22:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:01.948 17:22:57 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:01.948 17:22:57 json_config -- nvmf/common.sh@7 -- # return 0 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:04:01.948 INFO: JSON configuration test init 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.948 17:22:57 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:01.948 17:22:57 json_config -- json_config/common.sh@9 -- # local app=target 00:04:01.948 17:22:57 json_config -- json_config/common.sh@10 -- # shift 00:04:01.948 17:22:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:01.948 17:22:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:01.948 17:22:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:01.948 17:22:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.948 17:22:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:01.948 17:22:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=45881 00:04:01.948 17:22:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:01.948 Waiting for target to run... 00:04:01.948 17:22:57 json_config -- json_config/common.sh@25 -- # waitforlisten 45881 /var/tmp/spdk_tgt.sock 00:04:01.948 17:22:57 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@829 -- # '[' -z 45881 ']' 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:01.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:01.948 17:22:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:01.948 [2024-07-15 17:22:57.761822] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:01.948 [2024-07-15 17:22:57.762082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:02.512 EAL: TSC is not safe to use in SMP mode 00:04:02.512 EAL: TSC is not invariant 00:04:02.512 [2024-07-15 17:22:58.054718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.512 [2024-07-15 17:22:58.149821] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:02.512 [2024-07-15 17:22:58.152061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:03.077 17:22:58 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:03.077 17:22:58 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:03.077 00:04:03.077 17:22:58 json_config -- json_config/common.sh@26 -- # echo '' 00:04:03.077 17:22:58 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:03.077 17:22:58 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:03.077 17:22:58 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.077 17:22:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.077 17:22:58 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:03.077 17:22:58 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:03.077 17:22:58 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:03.077 17:22:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.077 17:22:58 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:03.077 17:22:58 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:03.077 17:22:58 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:03.641 [2024-07-15 17:22:59.242381] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:03.641 17:22:59 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:03.641 17:22:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:03.641 17:22:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.641 17:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.641 17:22:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:03.641 17:22:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:03.641 17:22:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:03.641 17:22:59 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:03.641 17:22:59 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:03.641 17:22:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:03.898 17:22:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:03.898 17:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:04:03.898 
17:22:59 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:04:03.898 17:22:59 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:03.898 17:22:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:04:03.898 17:22:59 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:03.898 17:22:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:04.170 17:22:59 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:04:04.170 17:22:59 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:04.170 17:22:59 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:04.170 17:22:59 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:04:04.170 17:22:59 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:04:04.170 17:22:59 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:04:04.170 17:22:59 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:04:04.427 Nvme0n1p0 Nvme0n1p1 00:04:04.427 17:23:00 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:04:04.428 17:23:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:04:04.685 [2024-07-15 17:23:00.454632] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:04.685 [2024-07-15 17:23:00.454700] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:04.685 00:04:04.685 17:23:00 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:04:04.685 17:23:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:04:04.943 Malloc3 00:04:05.202 17:23:00 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:05.202 17:23:00 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:05.460 [2024-07-15 17:23:01.034704] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:05.460 [2024-07-15 17:23:01.034766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
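The bdev topology in this part of the test is assembled purely through rpc.py calls; a condensed, hedged replay of the ones visible in the trace, using the same socket and bdev names:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2            # yields Nvme0n1p0 Nvme0n1p1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3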
00:04:05.460 [2024-07-15 17:23:01.034795] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1dcd12838180 00:04:05.460 [2024-07-15 17:23:01.034804] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:05.460 [2024-07-15 17:23:01.035459] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:05.460 [2024-07-15 17:23:01.035486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:05.460 PTBdevFromMalloc3 00:04:05.460 17:23:01 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:04:05.460 17:23:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:04:05.718 Null0 00:04:05.718 17:23:01 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:04:05.718 17:23:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:04:05.718 Malloc0 00:04:05.718 17:23:01 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:04:05.718 17:23:01 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:04:06.284 Malloc1 00:04:06.284 17:23:01 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:04:06.284 17:23:01 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:04:06.543 102400+0 records in 00:04:06.543 102400+0 records out 00:04:06.543 104857600 bytes transferred in 0.277661 secs (377645922 bytes/sec) 00:04:06.543 17:23:02 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:04:06.543 17:23:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:04:06.543 aio_disk 00:04:06.543 17:23:02 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:04:06.543 17:23:02 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:06.543 17:23:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:07.109 e3ed9398-42ce-11ef-96ac-773515fba644 00:04:07.109 17:23:02 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:04:07.109 17:23:02 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:04:07.109 17:23:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:04:07.109 17:23:02 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:04:07.109 17:23:02 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:04:07.367 17:23:03 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:07.367 17:23:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:07.929 17:23:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e415411c-42ce-11ef-96ac-773515fba644 bdev_register:e43b1b5a-42ce-11ef-96ac-773515fba644 bdev_register:e46847d0-42ce-11ef-96ac-773515fba644 bdev_register:e492671d-42ce-11ef-96ac-773515fba644 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:e415411c-42ce-11ef-96ac-773515fba644 bdev_register:e43b1b5a-42ce-11ef-96ac-773515fba644 bdev_register:e46847d0-42ce-11ef-96ac-773515fba644 bdev_register:e492671d-42ce-11ef-96ac-773515fba644 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@71 -- # sort 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@72 -- # sort 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:07.929 17:23:03 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:04:07.929 17:23:03 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:07.929 17:23:03 
json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:08.187 17:23:03 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:04:08.187 17:23:03 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # 
echo bdev_register:e415411c-42ce-11ef-96ac-773515fba644 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:e43b1b5a-42ce-11ef-96ac-773515fba644 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:e46847d0-42ce-11ef-96ac-773515fba644 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:e492671d-42ce-11ef-96ac-773515fba644 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e415411c-42ce-11ef-96ac-773515fba644 bdev_register:e43b1b5a-42ce-11ef-96ac-773515fba644 bdev_register:e46847d0-42ce-11ef-96ac-773515fba644 bdev_register:e492671d-42ce-11ef-96ac-773515fba644 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\4\1\5\4\1\1\c\-\4\2\c\e\-\1\1\e\f\-\9\6\a\c\-\7\7\3\5\1\5\f\b\a\6\4\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\4\3\b\1\b\5\a\-\4\2\c\e\-\1\1\e\f\-\9\6\a\c\-\7\7\3\5\1\5\f\b\a\6\4\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\4\6\8\4\7\d\0\-\4\2\c\e\-\1\1\e\f\-\9\6\a\c\-\7\7\3\5\1\5\f\b\a\6\4\4\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\e\4\9\2\6\7\1\d\-\4\2\c\e\-\1\1\e\f\-\9\6\a\c\-\7\7\3\5\1\5\f\b\a\6\4\4 ]] 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@86 -- # cat 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk bdev_register:e415411c-42ce-11ef-96ac-773515fba644 bdev_register:e43b1b5a-42ce-11ef-96ac-773515fba644 bdev_register:e46847d0-42ce-11ef-96ac-773515fba644 bdev_register:e492671d-42ce-11ef-96ac-773515fba644 00:04:08.187 Expected events matched: 00:04:08.187 bdev_register:Malloc0 00:04:08.187 bdev_register:Malloc0p0 00:04:08.187 bdev_register:Malloc0p1 00:04:08.187 bdev_register:Malloc0p2 
00:04:08.187 bdev_register:Malloc1 00:04:08.187 bdev_register:Malloc3 00:04:08.187 bdev_register:Null0 00:04:08.187 bdev_register:Nvme0n1 00:04:08.187 bdev_register:Nvme0n1p0 00:04:08.187 bdev_register:Nvme0n1p1 00:04:08.187 bdev_register:PTBdevFromMalloc3 00:04:08.187 bdev_register:aio_disk 00:04:08.187 bdev_register:e415411c-42ce-11ef-96ac-773515fba644 00:04:08.187 bdev_register:e43b1b5a-42ce-11ef-96ac-773515fba644 00:04:08.187 bdev_register:e46847d0-42ce-11ef-96ac-773515fba644 00:04:08.187 bdev_register:e492671d-42ce-11ef-96ac-773515fba644 00:04:08.187 17:23:04 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:04:08.187 17:23:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:08.187 17:23:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.445 17:23:04 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:08.445 17:23:04 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:08.445 17:23:04 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:08.445 17:23:04 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:08.445 17:23:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:08.445 17:23:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.445 17:23:04 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:08.445 17:23:04 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.445 17:23:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:08.703 MallocBdevForConfigChangeCheck 00:04:08.703 17:23:04 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:08.703 17:23:04 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:08.703 17:23:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:08.703 17:23:04 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:08.703 17:23:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:08.960 INFO: shutting down applications... 00:04:08.960 17:23:04 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
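Before shutting the target down, the running configuration is captured with save_config so it can be replayed against a fresh process. A hedged shell equivalent, writing to the path the relaunch below consumes:

# Dump the live configuration as JSON; the exact redirect target is an assumption based on configs_path.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json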
00:04:08.960 17:23:04 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:08.960 17:23:04 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:08.960 17:23:04 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:08.960 17:23:04 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:09.218 [2024-07-15 17:23:04.867270] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:04:09.218 Calling clear_iscsi_subsystem 00:04:09.218 Calling clear_nvmf_subsystem 00:04:09.218 Calling clear_bdev_subsystem 00:04:09.218 17:23:05 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:09.218 17:23:05 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:09.218 17:23:05 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:09.218 17:23:05 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:09.218 17:23:05 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:09.218 17:23:05 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:09.783 17:23:05 json_config -- json_config/json_config.sh@345 -- # break 00:04:09.783 17:23:05 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:09.783 17:23:05 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:09.783 17:23:05 json_config -- json_config/common.sh@31 -- # local app=target 00:04:09.783 17:23:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:09.783 17:23:05 json_config -- json_config/common.sh@35 -- # [[ -n 45881 ]] 00:04:09.783 17:23:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 45881 00:04:09.783 17:23:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:09.783 17:23:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:09.783 17:23:05 json_config -- json_config/common.sh@41 -- # kill -0 45881 00:04:09.783 17:23:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:10.371 17:23:05 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:10.371 17:23:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:10.371 17:23:05 json_config -- json_config/common.sh@41 -- # kill -0 45881 00:04:10.371 17:23:05 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:10.371 17:23:05 json_config -- json_config/common.sh@43 -- # break 00:04:10.371 SPDK target shutdown done 00:04:10.371 17:23:05 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:10.371 17:23:05 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:10.371 INFO: relaunching applications... 00:04:10.371 17:23:05 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
00:04:10.371 17:23:05 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:10.371 17:23:05 json_config -- json_config/common.sh@9 -- # local app=target 00:04:10.371 17:23:05 json_config -- json_config/common.sh@10 -- # shift 00:04:10.371 17:23:05 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:10.371 17:23:05 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:10.371 17:23:05 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:10.371 17:23:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.372 17:23:05 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:10.372 17:23:05 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46067 00:04:10.372 Waiting for target to run... 00:04:10.372 17:23:05 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:10.372 17:23:05 json_config -- json_config/common.sh@25 -- # waitforlisten 46067 /var/tmp/spdk_tgt.sock 00:04:10.372 17:23:05 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:10.372 17:23:05 json_config -- common/autotest_common.sh@829 -- # '[' -z 46067 ']' 00:04:10.372 17:23:05 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:10.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:10.372 17:23:05 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.372 17:23:05 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:10.372 17:23:05 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.372 17:23:05 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:10.372 [2024-07-15 17:23:05.953603] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:10.372 [2024-07-15 17:23:05.953769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:10.629 EAL: TSC is not safe to use in SMP mode 00:04:10.629 EAL: TSC is not invariant 00:04:10.629 [2024-07-15 17:23:06.224993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:10.629 [2024-07-15 17:23:06.312432] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:10.629 [2024-07-15 17:23:06.314676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.629 [2024-07-15 17:23:06.458719] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:10.629 [2024-07-15 17:23:06.458795] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:10.887 [2024-07-15 17:23:06.466705] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:10.887 [2024-07-15 17:23:06.466744] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:10.887 [2024-07-15 17:23:06.474720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:10.887 [2024-07-15 17:23:06.474745] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:10.887 [2024-07-15 17:23:06.474753] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:10.887 [2024-07-15 17:23:06.482720] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:10.887 [2024-07-15 17:23:06.551602] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:10.887 [2024-07-15 17:23:06.551639] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:10.887 [2024-07-15 17:23:06.551651] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f397037780 00:04:10.887 [2024-07-15 17:23:06.551659] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:10.887 [2024-07-15 17:23:06.551727] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:10.887 [2024-07-15 17:23:06.551738] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:11.145 17:23:06 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.145 17:23:06 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:11.145 00:04:11.145 INFO: Checking if target configuration is the same... 00:04:11.145 17:23:06 json_config -- json_config/common.sh@26 -- # echo '' 00:04:11.145 17:23:06 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:11.145 17:23:06 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:11.145 17:23:06 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.qfAZGx /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.402 + '[' 2 -ne 2 ']' 00:04:11.402 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:11.402 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:11.402 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:11.402 +++ basename /tmp//sh-np.qfAZGx 00:04:11.402 ++ mktemp /tmp/sh-np.qfAZGx.XXX 00:04:11.402 + tmp_file_1=/tmp/sh-np.qfAZGx.zuB 00:04:11.402 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.402 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.402 + tmp_file_2=/tmp/spdk_tgt_config.json.1Ci 00:04:11.402 + ret=0 00:04:11.402 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:11.402 17:23:06 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:11.402 17:23:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:11.660 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:11.660 + diff -u /tmp/sh-np.qfAZGx.zuB /tmp/spdk_tgt_config.json.1Ci 00:04:11.660 INFO: JSON config files are the same 00:04:11.660 + echo 'INFO: JSON config files are the same' 00:04:11.660 + rm /tmp/sh-np.qfAZGx.zuB /tmp/spdk_tgt_config.json.1Ci 00:04:11.660 + exit 0 00:04:11.660 17:23:07 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:11.660 INFO: changing configuration and checking if this can be detected... 00:04:11.660 17:23:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:11.660 17:23:07 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.660 17:23:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:11.919 17:23:07 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.IewaZM /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.919 + '[' 2 -ne 2 ']' 00:04:11.919 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:11.919 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:11.919 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:11.919 +++ basename /tmp//sh-np.IewaZM 00:04:11.919 ++ mktemp /tmp/sh-np.IewaZM.XXX 00:04:11.919 + tmp_file_1=/tmp/sh-np.IewaZM.QBy 00:04:11.919 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:11.919 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:11.919 + tmp_file_2=/tmp/spdk_tgt_config.json.e7x 00:04:11.919 + ret=0 00:04:11.919 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:11.919 17:23:07 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:11.919 17:23:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:12.486 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:12.486 + diff -u /tmp/sh-np.IewaZM.QBy /tmp/spdk_tgt_config.json.e7x 00:04:12.486 + ret=1 00:04:12.486 + echo '=== Start of file: /tmp/sh-np.IewaZM.QBy ===' 00:04:12.486 + cat /tmp/sh-np.IewaZM.QBy 00:04:12.486 + echo '=== End of file: /tmp/sh-np.IewaZM.QBy ===' 00:04:12.486 + echo '' 00:04:12.486 + echo '=== Start of file: /tmp/spdk_tgt_config.json.e7x ===' 00:04:12.486 + cat /tmp/spdk_tgt_config.json.e7x 00:04:12.486 + echo '=== End of file: /tmp/spdk_tgt_config.json.e7x ===' 00:04:12.486 + echo '' 00:04:12.486 + rm /tmp/sh-np.IewaZM.QBy /tmp/spdk_tgt_config.json.e7x 00:04:12.486 + exit 1 00:04:12.486 INFO: configuration change detected. 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:12.486 17:23:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.486 17:23:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@317 -- # [[ -n 46067 ]] 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:12.486 17:23:08 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:12.486 17:23:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:04:12.486 17:23:08 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:04:12.486 17:23:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:04:12.744 17:23:08 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:04:12.744 17:23:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:04:13.003 17:23:08 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:04:13.003 17:23:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 
00:04:13.263 17:23:08 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:04:13.263 17:23:08 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:04:13.521 17:23:09 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:13.521 17:23:09 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:04:13.521 17:23:09 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:13.521 17:23:09 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.521 17:23:09 json_config -- json_config/json_config.sh@323 -- # killprocess 46067 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@948 -- # '[' -z 46067 ']' 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@952 -- # kill -0 46067 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@953 -- # uname 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@956 -- # ps -c -o command 46067 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@956 -- # tail -1 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:13.521 killing process with pid 46067 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46067' 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@967 -- # kill 46067 00:04:13.521 17:23:09 json_config -- common/autotest_common.sh@972 -- # wait 46067 00:04:13.779 17:23:09 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:13.779 17:23:09 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:13.779 17:23:09 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:13.779 17:23:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.779 17:23:09 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:13.779 INFO: Success 00:04:13.779 17:23:09 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:13.779 00:04:13.779 real 0m11.865s 00:04:13.779 user 0m18.615s 00:04:13.779 sys 0m2.073s 00:04:13.779 17:23:09 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.779 ************************************ 00:04:13.779 END TEST json_config 00:04:13.779 ************************************ 00:04:13.779 17:23:09 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:13.779 17:23:09 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.779 17:23:09 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:13.779 17:23:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.779 17:23:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.779 17:23:09 -- common/autotest_common.sh@10 -- # set +x 00:04:13.779 ************************************ 00:04:13.779 START TEST json_config_extra_key 
00:04:13.779 ************************************ 00:04:13.779 17:23:09 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:14.038 17:23:09 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:14.038 17:23:09 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:14.038 17:23:09 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:14.038 INFO: launching applications... 00:04:14.038 17:23:09 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46200 00:04:14.038 Waiting for target to run... 00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:14.038 17:23:09 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46200 /var/tmp/spdk_tgt.sock 00:04:14.038 17:23:09 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 46200 ']' 00:04:14.038 17:23:09 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:14.038 17:23:09 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:14.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:14.038 17:23:09 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:14.038 17:23:09 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:14.039 17:23:09 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:14.039 17:23:09 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:14.039 [2024-07-15 17:23:09.633676] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:14.039 [2024-07-15 17:23:09.633950] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:14.297 EAL: TSC is not safe to use in SMP mode 00:04:14.297 EAL: TSC is not invariant 00:04:14.297 [2024-07-15 17:23:09.908118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.297 [2024-07-15 17:23:10.003322] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:14.297 [2024-07-15 17:23:10.005877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.863 17:23:10 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:14.863 00:04:14.863 17:23:10 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:14.863 INFO: shutting down applications... 00:04:14.863 17:23:10 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:04:14.863 17:23:10 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46200 ]] 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46200 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46200 00:04:14.863 17:23:10 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:15.429 17:23:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:15.429 17:23:11 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:15.429 17:23:11 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46200 00:04:15.429 17:23:11 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:15.429 17:23:11 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:15.429 17:23:11 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:15.429 SPDK target shutdown done 00:04:15.429 17:23:11 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:15.429 Success 00:04:15.429 17:23:11 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:15.429 00:04:15.429 real 0m1.669s 00:04:15.429 user 0m1.521s 00:04:15.429 sys 0m0.400s 00:04:15.429 17:23:11 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.429 17:23:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:15.429 ************************************ 00:04:15.429 END TEST json_config_extra_key 00:04:15.429 ************************************ 00:04:15.429 17:23:11 -- common/autotest_common.sh@1142 -- # return 0 00:04:15.429 17:23:11 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.429 17:23:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.429 17:23:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.429 17:23:11 -- common/autotest_common.sh@10 -- # set +x 00:04:15.429 ************************************ 00:04:15.429 START TEST alias_rpc 00:04:15.429 ************************************ 00:04:15.429 17:23:11 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:15.687 * Looking for test storage... 
00:04:15.687 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:15.687 17:23:11 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:15.687 17:23:11 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46258 00:04:15.687 17:23:11 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46258 00:04:15.687 17:23:11 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 46258 ']' 00:04:15.687 17:23:11 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.687 17:23:11 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:15.687 17:23:11 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:15.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.687 17:23:11 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.687 17:23:11 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:15.687 17:23:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.687 [2024-07-15 17:23:11.379155] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:15.687 [2024-07-15 17:23:11.379345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:16.253 EAL: TSC is not safe to use in SMP mode 00:04:16.253 EAL: TSC is not invariant 00:04:16.253 [2024-07-15 17:23:11.911702] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.253 [2024-07-15 17:23:11.995819] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:16.253 [2024-07-15 17:23:11.997918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.816 17:23:12 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:16.816 17:23:12 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:16.816 17:23:12 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:17.072 17:23:12 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46258 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 46258 ']' 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 46258 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 46258 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46258' 00:04:17.072 killing process with pid 46258 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@967 -- # kill 46258 00:04:17.072 17:23:12 alias_rpc -- common/autotest_common.sh@972 -- # wait 46258 00:04:17.330 00:04:17.330 real 0m1.767s 00:04:17.330 user 0m1.846s 00:04:17.330 sys 0m0.786s 00:04:17.330 17:23:12 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.330 17:23:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.330 ************************************ 00:04:17.330 END TEST alias_rpc 00:04:17.330 ************************************ 00:04:17.330 17:23:13 -- common/autotest_common.sh@1142 -- # return 0 00:04:17.330 17:23:13 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:17.330 17:23:13 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:17.330 17:23:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.330 17:23:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.330 17:23:13 -- common/autotest_common.sh@10 -- # set +x 00:04:17.330 ************************************ 00:04:17.330 START TEST spdkcli_tcp 00:04:17.330 ************************************ 00:04:17.330 17:23:13 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:17.588 * Looking for test storage... 
00:04:17.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:17.588 17:23:13 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:17.588 17:23:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46319 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46319 00:04:17.588 17:23:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:17.588 17:23:13 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 46319 ']' 00:04:17.588 17:23:13 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:17.588 17:23:13 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:17.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:17.588 17:23:13 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:17.588 17:23:13 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:17.588 17:23:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:17.588 [2024-07-15 17:23:13.181400] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:17.588 [2024-07-15 17:23:13.181690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:18.153 EAL: TSC is not safe to use in SMP mode 00:04:18.153 EAL: TSC is not invariant 00:04:18.153 [2024-07-15 17:23:13.720787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:18.153 [2024-07-15 17:23:13.821638] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:18.153 [2024-07-15 17:23:13.821720] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:18.153 [2024-07-15 17:23:13.824380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.153 [2024-07-15 17:23:13.824376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:18.411 17:23:14 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:18.411 17:23:14 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:18.669 17:23:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46327 00:04:18.669 17:23:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:18.669 17:23:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:18.669 [ 00:04:18.669 "spdk_get_version", 00:04:18.669 "rpc_get_methods", 00:04:18.669 "env_dpdk_get_mem_stats", 00:04:18.669 "trace_get_info", 00:04:18.669 "trace_get_tpoint_group_mask", 00:04:18.669 "trace_disable_tpoint_group", 00:04:18.669 "trace_enable_tpoint_group", 00:04:18.669 "trace_clear_tpoint_mask", 00:04:18.669 "trace_set_tpoint_mask", 00:04:18.669 "notify_get_notifications", 00:04:18.669 "notify_get_types", 00:04:18.669 "accel_get_stats", 00:04:18.669 "accel_set_options", 00:04:18.669 "accel_set_driver", 00:04:18.669 "accel_crypto_key_destroy", 00:04:18.669 "accel_crypto_keys_get", 00:04:18.669 "accel_crypto_key_create", 00:04:18.669 "accel_assign_opc", 00:04:18.669 "accel_get_module_info", 00:04:18.669 "accel_get_opc_assignments", 00:04:18.669 "bdev_get_histogram", 00:04:18.669 "bdev_enable_histogram", 00:04:18.669 "bdev_set_qos_limit", 00:04:18.669 "bdev_set_qd_sampling_period", 00:04:18.669 "bdev_get_bdevs", 00:04:18.669 "bdev_reset_iostat", 00:04:18.669 "bdev_get_iostat", 00:04:18.669 "bdev_examine", 00:04:18.669 "bdev_wait_for_examine", 00:04:18.669 "bdev_set_options", 00:04:18.669 "keyring_get_keys", 00:04:18.669 "framework_get_pci_devices", 00:04:18.669 "framework_get_config", 00:04:18.669 "framework_get_subsystems", 00:04:18.669 "sock_get_default_impl", 00:04:18.669 "sock_set_default_impl", 00:04:18.669 "sock_impl_set_options", 00:04:18.669 "sock_impl_get_options", 00:04:18.669 "thread_set_cpumask", 00:04:18.669 "framework_get_governor", 00:04:18.669 "framework_get_scheduler", 00:04:18.669 "framework_set_scheduler", 00:04:18.669 "framework_get_reactors", 00:04:18.669 "thread_get_io_channels", 00:04:18.669 "thread_get_pollers", 00:04:18.669 "thread_get_stats", 00:04:18.669 "framework_monitor_context_switch", 00:04:18.669 "spdk_kill_instance", 00:04:18.669 "log_enable_timestamps", 00:04:18.669 "log_get_flags", 00:04:18.669 "log_clear_flag", 00:04:18.669 "log_set_flag", 00:04:18.669 "log_get_level", 00:04:18.669 "log_set_level", 00:04:18.669 "log_get_print_level", 00:04:18.669 "log_set_print_level", 00:04:18.669 "framework_enable_cpumask_locks", 00:04:18.669 "framework_disable_cpumask_locks", 00:04:18.669 "framework_wait_init", 00:04:18.669 "framework_start_init", 00:04:18.669 "iobuf_get_stats", 00:04:18.669 "iobuf_set_options", 00:04:18.669 "vmd_rescan", 00:04:18.669 "vmd_remove_device", 00:04:18.669 "vmd_enable", 00:04:18.669 "nvmf_stop_mdns_prr", 00:04:18.669 "nvmf_publish_mdns_prr", 00:04:18.669 "nvmf_subsystem_get_listeners", 00:04:18.669 "nvmf_subsystem_get_qpairs", 00:04:18.669 "nvmf_subsystem_get_controllers", 00:04:18.669 "nvmf_get_stats", 00:04:18.669 "nvmf_get_transports", 00:04:18.669 "nvmf_create_transport", 00:04:18.669 "nvmf_get_targets", 00:04:18.669 "nvmf_delete_target", 00:04:18.669 "nvmf_create_target", 00:04:18.669 
"nvmf_subsystem_allow_any_host", 00:04:18.669 "nvmf_subsystem_remove_host", 00:04:18.669 "nvmf_subsystem_add_host", 00:04:18.669 "nvmf_ns_remove_host", 00:04:18.669 "nvmf_ns_add_host", 00:04:18.669 "nvmf_subsystem_remove_ns", 00:04:18.669 "nvmf_subsystem_add_ns", 00:04:18.669 "nvmf_subsystem_listener_set_ana_state", 00:04:18.669 "nvmf_discovery_get_referrals", 00:04:18.669 "nvmf_discovery_remove_referral", 00:04:18.669 "nvmf_discovery_add_referral", 00:04:18.669 "nvmf_subsystem_remove_listener", 00:04:18.669 "nvmf_subsystem_add_listener", 00:04:18.669 "nvmf_delete_subsystem", 00:04:18.669 "nvmf_create_subsystem", 00:04:18.669 "nvmf_get_subsystems", 00:04:18.669 "nvmf_set_crdt", 00:04:18.669 "nvmf_set_config", 00:04:18.669 "nvmf_set_max_subsystems", 00:04:18.669 "scsi_get_devices", 00:04:18.669 "iscsi_get_histogram", 00:04:18.669 "iscsi_enable_histogram", 00:04:18.669 "iscsi_set_options", 00:04:18.669 "iscsi_get_auth_groups", 00:04:18.669 "iscsi_auth_group_remove_secret", 00:04:18.669 "iscsi_auth_group_add_secret", 00:04:18.669 "iscsi_delete_auth_group", 00:04:18.669 "iscsi_create_auth_group", 00:04:18.669 "iscsi_set_discovery_auth", 00:04:18.669 "iscsi_get_options", 00:04:18.669 "iscsi_target_node_request_logout", 00:04:18.669 "iscsi_target_node_set_redirect", 00:04:18.669 "iscsi_target_node_set_auth", 00:04:18.669 "iscsi_target_node_add_lun", 00:04:18.669 "iscsi_get_stats", 00:04:18.669 "iscsi_get_connections", 00:04:18.669 "iscsi_portal_group_set_auth", 00:04:18.669 "iscsi_start_portal_group", 00:04:18.669 "iscsi_delete_portal_group", 00:04:18.669 "iscsi_create_portal_group", 00:04:18.669 "iscsi_get_portal_groups", 00:04:18.669 "iscsi_delete_target_node", 00:04:18.669 "iscsi_target_node_remove_pg_ig_maps", 00:04:18.669 "iscsi_target_node_add_pg_ig_maps", 00:04:18.669 "iscsi_create_target_node", 00:04:18.669 "iscsi_get_target_nodes", 00:04:18.669 "iscsi_delete_initiator_group", 00:04:18.669 "iscsi_initiator_group_remove_initiators", 00:04:18.669 "iscsi_initiator_group_add_initiators", 00:04:18.669 "iscsi_create_initiator_group", 00:04:18.669 "iscsi_get_initiator_groups", 00:04:18.669 "keyring_file_remove_key", 00:04:18.669 "keyring_file_add_key", 00:04:18.669 "iaa_scan_accel_module", 00:04:18.669 "dsa_scan_accel_module", 00:04:18.669 "ioat_scan_accel_module", 00:04:18.669 "accel_error_inject_error", 00:04:18.669 "bdev_aio_delete", 00:04:18.669 "bdev_aio_rescan", 00:04:18.669 "bdev_aio_create", 00:04:18.669 "blobfs_create", 00:04:18.669 "blobfs_detect", 00:04:18.669 "blobfs_set_cache_size", 00:04:18.669 "bdev_zone_block_delete", 00:04:18.669 "bdev_zone_block_create", 00:04:18.669 "bdev_delay_delete", 00:04:18.669 "bdev_delay_create", 00:04:18.669 "bdev_delay_update_latency", 00:04:18.669 "bdev_split_delete", 00:04:18.669 "bdev_split_create", 00:04:18.669 "bdev_error_inject_error", 00:04:18.669 "bdev_error_delete", 00:04:18.669 "bdev_error_create", 00:04:18.669 "bdev_raid_set_options", 00:04:18.669 "bdev_raid_remove_base_bdev", 00:04:18.669 "bdev_raid_add_base_bdev", 00:04:18.669 "bdev_raid_delete", 00:04:18.669 "bdev_raid_create", 00:04:18.669 "bdev_raid_get_bdevs", 00:04:18.669 "bdev_lvol_set_parent_bdev", 00:04:18.669 "bdev_lvol_set_parent", 00:04:18.669 "bdev_lvol_check_shallow_copy", 00:04:18.669 "bdev_lvol_start_shallow_copy", 00:04:18.669 "bdev_lvol_grow_lvstore", 00:04:18.669 "bdev_lvol_get_lvols", 00:04:18.669 "bdev_lvol_get_lvstores", 00:04:18.669 "bdev_lvol_delete", 00:04:18.669 "bdev_lvol_set_read_only", 00:04:18.669 "bdev_lvol_resize", 00:04:18.669 "bdev_lvol_decouple_parent", 
00:04:18.669 "bdev_lvol_inflate", 00:04:18.669 "bdev_lvol_rename", 00:04:18.669 "bdev_lvol_clone_bdev", 00:04:18.669 "bdev_lvol_clone", 00:04:18.669 "bdev_lvol_snapshot", 00:04:18.669 "bdev_lvol_create", 00:04:18.669 "bdev_lvol_delete_lvstore", 00:04:18.669 "bdev_lvol_rename_lvstore", 00:04:18.669 "bdev_lvol_create_lvstore", 00:04:18.670 "bdev_passthru_delete", 00:04:18.670 "bdev_passthru_create", 00:04:18.670 "bdev_nvme_send_cmd", 00:04:18.670 "bdev_nvme_get_path_iostat", 00:04:18.670 "bdev_nvme_get_mdns_discovery_info", 00:04:18.670 "bdev_nvme_stop_mdns_discovery", 00:04:18.670 "bdev_nvme_start_mdns_discovery", 00:04:18.670 "bdev_nvme_set_multipath_policy", 00:04:18.670 "bdev_nvme_set_preferred_path", 00:04:18.670 "bdev_nvme_get_io_paths", 00:04:18.670 "bdev_nvme_remove_error_injection", 00:04:18.670 "bdev_nvme_add_error_injection", 00:04:18.670 "bdev_nvme_get_discovery_info", 00:04:18.670 "bdev_nvme_stop_discovery", 00:04:18.670 "bdev_nvme_start_discovery", 00:04:18.670 "bdev_nvme_get_controller_health_info", 00:04:18.670 "bdev_nvme_disable_controller", 00:04:18.670 "bdev_nvme_enable_controller", 00:04:18.670 "bdev_nvme_reset_controller", 00:04:18.670 "bdev_nvme_get_transport_statistics", 00:04:18.670 "bdev_nvme_apply_firmware", 00:04:18.670 "bdev_nvme_detach_controller", 00:04:18.670 "bdev_nvme_get_controllers", 00:04:18.670 "bdev_nvme_attach_controller", 00:04:18.670 "bdev_nvme_set_hotplug", 00:04:18.670 "bdev_nvme_set_options", 00:04:18.670 "bdev_null_resize", 00:04:18.670 "bdev_null_delete", 00:04:18.670 "bdev_null_create", 00:04:18.670 "bdev_malloc_delete", 00:04:18.670 "bdev_malloc_create" 00:04:18.670 ] 00:04:18.926 17:23:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:18.926 17:23:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:18.926 17:23:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46319 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 46319 ']' 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 46319 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps -c -o command 46319 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # tail -1 00:04:18.926 17:23:14 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:18.926 killing process with pid 46319 00:04:18.927 17:23:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:18.927 17:23:14 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46319' 00:04:18.927 17:23:14 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 46319 00:04:18.927 17:23:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 46319 00:04:19.184 00:04:19.184 real 0m1.774s 00:04:19.184 user 0m2.851s 00:04:19.184 sys 0m0.719s 00:04:19.184 ************************************ 00:04:19.184 END TEST spdkcli_tcp 00:04:19.184 ************************************ 00:04:19.184 17:23:14 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:19.184 17:23:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:19.184 17:23:14 -- common/autotest_common.sh@1142 -- # return 
0 00:04:19.185 17:23:14 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.185 17:23:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:19.185 17:23:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.185 17:23:14 -- common/autotest_common.sh@10 -- # set +x 00:04:19.185 ************************************ 00:04:19.185 START TEST dpdk_mem_utility 00:04:19.185 ************************************ 00:04:19.185 17:23:14 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:19.185 * Looking for test storage... 00:04:19.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:19.185 17:23:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:19.185 17:23:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46398 00:04:19.185 17:23:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46398 00:04:19.185 17:23:14 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 46398 ']' 00:04:19.185 17:23:14 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:19.185 17:23:14 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:19.185 17:23:14 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:19.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:19.185 17:23:14 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:19.185 17:23:14 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:19.185 17:23:14 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:19.185 [2024-07-15 17:23:14.973227] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:19.185 [2024-07-15 17:23:14.973434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:19.748 EAL: TSC is not safe to use in SMP mode 00:04:19.748 EAL: TSC is not invariant 00:04:19.748 [2024-07-15 17:23:15.524303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.004 [2024-07-15 17:23:15.608783] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:20.004 [2024-07-15 17:23:15.610946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.260 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:20.260 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:20.260 17:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:20.260 17:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:20.260 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:20.260 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.260 { 00:04:20.260 "filename": "/tmp/spdk_mem_dump.txt" 00:04:20.260 } 00:04:20.260 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:20.260 17:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:20.260 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:20.260 1 heaps totaling size 2048.000000 MiB 00:04:20.260 size: 2048.000000 MiB heap id: 0 00:04:20.260 end heaps---------- 00:04:20.260 8 mempools totaling size 592.563660 MiB 00:04:20.260 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:20.260 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:20.260 size: 84.500549 MiB name: bdev_io_46398 00:04:20.260 size: 51.008362 MiB name: evtpool_46398 00:04:20.260 size: 50.000549 MiB name: msgpool_46398 00:04:20.260 size: 21.758911 MiB name: PDU_Pool 00:04:20.260 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:20.260 size: 0.026123 MiB name: Session_Pool 00:04:20.260 end mempools------- 00:04:20.260 6 memzones totaling size 4.142822 MiB 00:04:20.260 size: 1.000366 MiB name: RG_ring_0_46398 00:04:20.260 size: 1.000366 MiB name: RG_ring_1_46398 00:04:20.260 size: 1.000366 MiB name: RG_ring_4_46398 00:04:20.260 size: 1.000366 MiB name: RG_ring_5_46398 00:04:20.260 size: 0.125366 MiB name: RG_ring_2_46398 00:04:20.260 size: 0.015991 MiB name: RG_ring_3_46398 00:04:20.260 end memzones------- 00:04:20.260 17:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:20.516 heap id: 0 total size: 2048.000000 MiB number of busy elements: 41 number of free elements: 3 00:04:20.516 list of free elements. size: 1254.071533 MiB 00:04:20.516 element at address: 0x1060000000 with size: 1254.001099 MiB 00:04:20.516 element at address: 0x10c8000000 with size: 0.070129 MiB 00:04:20.516 element at address: 0x10d98b6000 with size: 0.000305 MiB 00:04:20.516 list of standard malloc elements. 
size: 197.218323 MiB 00:04:20.516 element at address: 0x10cd4b0f80 with size: 132.000122 MiB 00:04:20.516 element at address: 0x10d58b5f80 with size: 64.000122 MiB 00:04:20.516 element at address: 0x10c7efff80 with size: 1.000122 MiB 00:04:20.516 element at address: 0x10dffd9f00 with size: 0.140747 MiB 00:04:20.516 element at address: 0x10c8020c80 with size: 0.062622 MiB 00:04:20.516 element at address: 0x10dfffdf80 with size: 0.007935 MiB 00:04:20.516 element at address: 0x10d58b1000 with size: 0.000305 MiB 00:04:20.516 element at address: 0x10d58b18c0 with size: 0.000305 MiB 00:04:20.516 element at address: 0x10d58b1140 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1200 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b12c0 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1380 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1440 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1500 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b15c0 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1680 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1740 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1800 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1a00 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1ac0 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d58b1cc0 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d98b6140 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d98b6200 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d98b62c0 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d98b6380 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d98b6440 with size: 0.000183 MiB 00:04:20.516 element at address: 0x10d98b6500 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d98b6700 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d98b67c0 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d98b6880 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d98b6940 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d98d6c00 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d98d6cc0 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d99d6f80 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d9ad7240 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10d9ad7300 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10dccd7640 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10dccd7840 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10dccd7900 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10dfed7c40 with size: 0.000183 MiB 00:04:20.517 element at address: 0x10dffd9e40 with size: 0.000183 MiB 00:04:20.517 list of memzone associated elements. 
size: 596.710144 MiB 00:04:20.517 element at address: 0x10b93f7f00 with size: 211.013000 MiB 00:04:20.517 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:20.517 element at address: 0x10afa82c80 with size: 152.449524 MiB 00:04:20.517 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:20.517 element at address: 0x10c8030d00 with size: 84.000122 MiB 00:04:20.517 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46398_0 00:04:20.517 element at address: 0x10dccd79c0 with size: 48.000122 MiB 00:04:20.517 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46398_0 00:04:20.517 element at address: 0x10d9ad73c0 with size: 48.000122 MiB 00:04:20.517 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46398_0 00:04:20.517 element at address: 0x10c683d780 with size: 20.250671 MiB 00:04:20.517 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:20.517 element at address: 0x10ae700680 with size: 18.000671 MiB 00:04:20.517 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:20.517 element at address: 0x10dfcd7a40 with size: 2.000488 MiB 00:04:20.517 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46398 00:04:20.517 element at address: 0x10dcad7440 with size: 2.000488 MiB 00:04:20.517 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46398 00:04:20.517 element at address: 0x10dfed7d00 with size: 1.008118 MiB 00:04:20.517 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46398 00:04:20.517 element at address: 0x10c7cfdc40 with size: 1.008118 MiB 00:04:20.517 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:20.517 element at address: 0x10c673b640 with size: 1.008118 MiB 00:04:20.517 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:20.517 element at address: 0x10b92f5dc0 with size: 1.008118 MiB 00:04:20.517 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:20.517 element at address: 0x10af980b40 with size: 1.008118 MiB 00:04:20.517 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:20.517 element at address: 0x10d99d7040 with size: 1.000488 MiB 00:04:20.517 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46398 00:04:20.517 element at address: 0x10d98d6d80 with size: 1.000488 MiB 00:04:20.517 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46398 00:04:20.517 element at address: 0x10c7dffd80 with size: 1.000488 MiB 00:04:20.517 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46398 00:04:20.517 element at address: 0x10ae600480 with size: 1.000488 MiB 00:04:20.517 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46398 00:04:20.517 element at address: 0x10cd430d80 with size: 0.500488 MiB 00:04:20.517 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46398 00:04:20.517 element at address: 0x10c7c7da40 with size: 0.500488 MiB 00:04:20.517 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:20.517 element at address: 0x10af900940 with size: 0.500488 MiB 00:04:20.517 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:20.517 element at address: 0x10c66fb440 with size: 0.250488 MiB 00:04:20.517 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:20.517 element at address: 0x10d98b6a00 with size: 0.125488 MiB 00:04:20.517 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46398 00:04:20.517 
element at address: 0x10c8018a80 with size: 0.031738 MiB 00:04:20.517 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:20.517 element at address: 0x10c8011f40 with size: 0.023743 MiB 00:04:20.517 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:20.517 element at address: 0x10d58b1d80 with size: 0.016113 MiB 00:04:20.517 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46398 00:04:20.517 element at address: 0x10c8018080 with size: 0.002441 MiB 00:04:20.517 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:20.517 element at address: 0x10dccd7700 with size: 0.000305 MiB 00:04:20.517 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46398 00:04:20.517 element at address: 0x10d58b1b80 with size: 0.000305 MiB 00:04:20.517 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46398 00:04:20.517 element at address: 0x10d98b65c0 with size: 0.000305 MiB 00:04:20.517 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:20.517 17:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:20.517 17:23:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46398 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 46398 ']' 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 46398 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # tail -1 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps -c -o command 46398 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:20.517 killing process with pid 46398 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46398' 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 46398 00:04:20.517 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 46398 00:04:20.773 00:04:20.773 real 0m1.538s 00:04:20.773 user 0m1.424s 00:04:20.773 sys 0m0.787s 00:04:20.773 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:20.773 17:23:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:20.773 ************************************ 00:04:20.773 END TEST dpdk_mem_utility 00:04:20.773 ************************************ 00:04:20.773 17:23:16 -- common/autotest_common.sh@1142 -- # return 0 00:04:20.773 17:23:16 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:20.773 17:23:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.773 17:23:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.773 17:23:16 -- common/autotest_common.sh@10 -- # set +x 00:04:20.773 ************************************ 00:04:20.773 START TEST event 00:04:20.773 ************************************ 00:04:20.773 17:23:16 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:20.773 * Looking for test storage... 
00:04:20.773 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:20.773 17:23:16 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:20.774 17:23:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:20.774 17:23:16 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.774 17:23:16 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:20.774 17:23:16 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.774 17:23:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:20.774 ************************************ 00:04:20.774 START TEST event_perf 00:04:20.774 ************************************ 00:04:20.774 17:23:16 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:20.774 Running I/O for 1 seconds...[2024-07-15 17:23:16.566576] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:20.774 [2024-07-15 17:23:16.566758] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:21.387 EAL: TSC is not safe to use in SMP mode 00:04:21.387 EAL: TSC is not invariant 00:04:21.387 [2024-07-15 17:23:17.139484] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:21.645 [2024-07-15 17:23:17.224847] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:21.645 [2024-07-15 17:23:17.224904] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:21.645 [2024-07-15 17:23:17.224914] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:21.645 [2024-07-15 17:23:17.224922] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:21.645 [2024-07-15 17:23:17.228966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.645 [2024-07-15 17:23:17.229171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.645 [2024-07-15 17:23:17.229071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:21.645 Running I/O for 1 seconds...[2024-07-15 17:23:17.229169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:22.576 00:04:22.576 lcore 0: 2501601 00:04:22.576 lcore 1: 2501602 00:04:22.576 lcore 2: 2501599 00:04:22.576 lcore 3: 2501601 00:04:22.576 done. 
00:04:22.576 00:04:22.576 real 0m1.828s 00:04:22.576 user 0m4.208s 00:04:22.576 sys 0m0.616s 00:04:22.576 17:23:18 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:22.576 17:23:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:22.576 ************************************ 00:04:22.576 END TEST event_perf 00:04:22.576 ************************************ 00:04:22.834 17:23:18 event -- common/autotest_common.sh@1142 -- # return 0 00:04:22.834 17:23:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:22.834 17:23:18 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:22.834 17:23:18 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:22.834 17:23:18 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.834 ************************************ 00:04:22.834 START TEST event_reactor 00:04:22.834 ************************************ 00:04:22.834 17:23:18 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:22.834 [2024-07-15 17:23:18.436771] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:22.834 [2024-07-15 17:23:18.437007] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:23.400 EAL: TSC is not safe to use in SMP mode 00:04:23.400 EAL: TSC is not invariant 00:04:23.400 [2024-07-15 17:23:18.968641] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.400 [2024-07-15 17:23:19.072712] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:23.400 [2024-07-15 17:23:19.075081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.770 test_start 00:04:24.770 oneshot 00:04:24.770 tick 100 00:04:24.770 tick 100 00:04:24.770 tick 250 00:04:24.770 tick 100 00:04:24.770 tick 100 00:04:24.770 tick 100 00:04:24.770 tick 250 00:04:24.770 tick 500 00:04:24.770 tick 100 00:04:24.770 tick 100 00:04:24.770 tick 250 00:04:24.770 tick 100 00:04:24.770 tick 100 00:04:24.770 test_end 00:04:24.770 00:04:24.770 real 0m1.755s 00:04:24.770 user 0m1.187s 00:04:24.770 sys 0m0.567s 00:04:24.770 17:23:20 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:24.770 17:23:20 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:24.770 ************************************ 00:04:24.770 END TEST event_reactor 00:04:24.770 ************************************ 00:04:24.770 17:23:20 event -- common/autotest_common.sh@1142 -- # return 0 00:04:24.770 17:23:20 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:24.770 17:23:20 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:24.770 17:23:20 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.770 17:23:20 event -- common/autotest_common.sh@10 -- # set +x 00:04:24.770 ************************************ 00:04:24.770 START TEST event_reactor_perf 00:04:24.770 ************************************ 00:04:24.770 17:23:20 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:24.770 [2024-07-15 17:23:20.233107] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:04:24.770 [2024-07-15 17:23:20.233372] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:25.027 EAL: TSC is not safe to use in SMP mode 00:04:25.027 EAL: TSC is not invariant 00:04:25.027 [2024-07-15 17:23:20.752719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.027 [2024-07-15 17:23:20.836933] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:25.027 [2024-07-15 17:23:20.839099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.419 test_start 00:04:26.419 test_end 00:04:26.419 Performance: 3587236 events per second 00:04:26.419 00:04:26.419 real 0m1.730s 00:04:26.419 user 0m1.175s 00:04:26.419 sys 0m0.552s 00:04:26.419 17:23:21 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.419 ************************************ 00:04:26.419 END TEST event_reactor_perf 00:04:26.419 17:23:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:26.419 ************************************ 00:04:26.419 17:23:21 event -- common/autotest_common.sh@1142 -- # return 0 00:04:26.419 17:23:21 event -- event/event.sh@49 -- # uname -s 00:04:26.419 17:23:21 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:26.419 00:04:26.419 real 0m5.567s 00:04:26.419 user 0m6.718s 00:04:26.419 sys 0m1.874s 00:04:26.419 17:23:21 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:26.419 17:23:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:26.419 ************************************ 00:04:26.419 END TEST event 00:04:26.419 ************************************ 00:04:26.419 17:23:22 -- common/autotest_common.sh@1142 -- # return 0 00:04:26.419 17:23:22 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:26.419 17:23:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:26.419 17:23:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.419 17:23:22 -- common/autotest_common.sh@10 -- # set +x 00:04:26.419 ************************************ 00:04:26.419 START TEST thread 00:04:26.419 ************************************ 00:04:26.419 17:23:22 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:26.419 * Looking for test storage... 00:04:26.419 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:26.419 17:23:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:26.419 17:23:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:26.419 17:23:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:26.419 17:23:22 thread -- common/autotest_common.sh@10 -- # set +x 00:04:26.419 ************************************ 00:04:26.419 START TEST thread_poller_perf 00:04:26.419 ************************************ 00:04:26.419 17:23:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:26.419 [2024-07-15 17:23:22.210454] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:04:26.419 [2024-07-15 17:23:22.210665] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:26.987 EAL: TSC is not safe to use in SMP mode 00:04:26.987 EAL: TSC is not invariant 00:04:26.987 [2024-07-15 17:23:22.744371] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:27.245 [2024-07-15 17:23:22.846831] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:27.245 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:27.245 [2024-07-15 17:23:22.849358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.178 ====================================== 00:04:28.178 busy:2201895956 (cyc) 00:04:28.178 total_run_count: 5490000 00:04:28.178 tsc_hz: 2199996845 (cyc) 00:04:28.178 ====================================== 00:04:28.178 poller_cost: 401 (cyc), 182 (nsec) 00:04:28.178 00:04:28.178 real 0m1.762s 00:04:28.178 user 0m1.201s 00:04:28.178 sys 0m0.560s 00:04:28.178 17:23:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:28.178 ************************************ 00:04:28.178 END TEST thread_poller_perf 00:04:28.178 17:23:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:28.178 ************************************ 00:04:28.178 17:23:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:28.178 17:23:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:28.178 17:23:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:28.178 17:23:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:28.178 17:23:23 thread -- common/autotest_common.sh@10 -- # set +x 00:04:28.178 ************************************ 00:04:28.178 START TEST thread_poller_perf 00:04:28.178 ************************************ 00:04:28.178 17:23:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:28.436 [2024-07-15 17:23:24.010108] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:28.436 [2024-07-15 17:23:24.010378] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:29.003 EAL: TSC is not safe to use in SMP mode 00:04:29.003 EAL: TSC is not invariant 00:04:29.003 [2024-07-15 17:23:24.539148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.003 [2024-07-15 17:23:24.635028] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:29.003 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:04:29.003 [2024-07-15 17:23:24.637288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.939 ====================================== 00:04:29.939 busy:2201108910 (cyc) 00:04:29.939 total_run_count: 70399000 00:04:29.939 tsc_hz: 2199996845 (cyc) 00:04:29.939 ====================================== 00:04:29.939 poller_cost: 31 (cyc), 14 (nsec) 00:04:29.939 00:04:29.939 real 0m1.750s 00:04:29.939 user 0m1.181s 00:04:29.939 sys 0m0.568s 00:04:29.939 17:23:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.939 17:23:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:29.939 ************************************ 00:04:29.939 END TEST thread_poller_perf 00:04:29.939 ************************************ 00:04:30.198 17:23:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:30.198 17:23:25 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:04:30.198 17:23:25 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:30.198 17:23:25 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.198 17:23:25 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.198 17:23:25 thread -- common/autotest_common.sh@10 -- # set +x 00:04:30.198 ************************************ 00:04:30.198 START TEST thread_spdk_lock 00:04:30.198 ************************************ 00:04:30.198 17:23:25 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:30.198 [2024-07-15 17:23:25.806220] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:30.198 [2024-07-15 17:23:25.806485] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:30.764 EAL: TSC is not safe to use in SMP mode 00:04:30.764 EAL: TSC is not invariant 00:04:30.764 [2024-07-15 17:23:26.351370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:30.764 [2024-07-15 17:23:26.440682] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:30.764 [2024-07-15 17:23:26.440742] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:30.764 [2024-07-15 17:23:26.443386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.764 [2024-07-15 17:23:26.443375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:31.330 [2024-07-15 17:23:26.879473] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:31.330 [2024-07-15 17:23:26.879553] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:31.330 [2024-07-15 17:23:26.879565] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x315be0 00:04:31.330 [2024-07-15 17:23:26.880058] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:31.330 [2024-07-15 17:23:26.880159] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:31.330 [2024-07-15 17:23:26.880168] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:31.330 Starting test contend 00:04:31.330 Worker Delay Wait us Hold us Total us 00:04:31.330 0 3 259680 161991 421671 00:04:31.330 1 5 161600 263052 424653 00:04:31.330 PASS test contend 00:04:31.330 Starting test hold_by_poller 00:04:31.330 PASS test hold_by_poller 00:04:31.330 Starting test hold_by_message 00:04:31.330 PASS test hold_by_message 00:04:31.330 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:04:31.330 100014 assertions passed 00:04:31.330 0 assertions failed 00:04:31.330 00:04:31.330 real 0m1.198s 00:04:31.330 user 0m1.067s 00:04:31.330 sys 0m0.565s 00:04:31.330 17:23:26 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.330 ************************************ 00:04:31.330 END TEST thread_spdk_lock 00:04:31.330 ************************************ 00:04:31.330 17:23:26 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:04:31.330 17:23:27 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:31.330 00:04:31.330 real 0m4.993s 00:04:31.330 user 0m3.621s 00:04:31.330 sys 0m1.868s 00:04:31.330 17:23:27 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:31.330 17:23:27 thread -- common/autotest_common.sh@10 -- # set +x 00:04:31.330 ************************************ 00:04:31.330 END TEST thread 00:04:31.330 ************************************ 00:04:31.330 17:23:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:31.330 17:23:27 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:31.330 17:23:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:31.330 17:23:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:31.330 17:23:27 -- common/autotest_common.sh@10 -- # set +x 00:04:31.330 ************************************ 00:04:31.330 START TEST accel 00:04:31.330 ************************************ 00:04:31.330 17:23:27 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:31.588 * Looking for test storage... 
00:04:31.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:04:31.588 17:23:27 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:31.588 17:23:27 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:31.588 17:23:27 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:31.588 17:23:27 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=46702 00:04:31.588 17:23:27 accel -- accel/accel.sh@63 -- # waitforlisten 46702 00:04:31.588 17:23:27 accel -- common/autotest_common.sh@829 -- # '[' -z 46702 ']' 00:04:31.588 17:23:27 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:31.588 17:23:27 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:31.588 17:23:27 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:31.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:31.588 17:23:27 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:31.588 17:23:27 accel -- common/autotest_common.sh@10 -- # set +x 00:04:31.588 17:23:27 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.CEsDRM 00:04:31.588 [2024-07-15 17:23:27.207876] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:31.588 [2024-07-15 17:23:27.208081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:32.154 EAL: TSC is not safe to use in SMP mode 00:04:32.154 EAL: TSC is not invariant 00:04:32.154 [2024-07-15 17:23:27.775391] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.154 [2024-07-15 17:23:27.866020] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:32.154 17:23:27 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:32.154 17:23:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:32.154 17:23:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:32.154 17:23:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:32.154 17:23:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:32.154 17:23:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:32.154 17:23:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:32.154 17:23:27 accel -- accel/accel.sh@41 -- # jq -r . 00:04:32.154 [2024-07-15 17:23:27.877413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@862 -- # return 0 00:04:32.721 17:23:28 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:32.721 17:23:28 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:32.721 17:23:28 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:32.721 17:23:28 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:32.721 17:23:28 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:32.721 17:23:28 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:32.721 17:23:28 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@10 -- # set +x 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 
17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # IFS== 00:04:32.721 17:23:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:32.721 17:23:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:32.721 17:23:28 accel -- accel/accel.sh@75 -- # killprocess 46702 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@948 -- # '[' -z 46702 ']' 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@952 -- # kill -0 46702 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@953 -- # uname 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@956 -- # ps -c -o command 46702 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@956 -- # tail -1 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46702' 00:04:32.721 killing process with pid 46702 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@967 -- # kill 46702 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@972 -- # wait 46702 00:04:32.721 17:23:28 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:32.721 17:23:28 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.721 17:23:28 accel -- common/autotest_common.sh@10 -- # set +x 00:04:32.980 17:23:28 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:32.980 17:23:28 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.6Lz9YA -h 00:04:32.980 17:23:28 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.980 17:23:28 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:32.980 17:23:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:32.980 17:23:28 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:32.980 17:23:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:32.980 17:23:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.980 17:23:28 accel -- common/autotest_common.sh@10 -- # set +x 00:04:32.980 ************************************ 00:04:32.980 START TEST 
accel_missing_filename 00:04:32.980 ************************************ 00:04:32.980 17:23:28 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:32.980 17:23:28 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:32.980 17:23:28 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:32.980 17:23:28 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:32.980 17:23:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.980 17:23:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:32.980 17:23:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:32.980 17:23:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:32.980 17:23:28 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.cQ1F3M -t 1 -w compress 00:04:32.980 [2024-07-15 17:23:28.620073] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:32.980 [2024-07-15 17:23:28.620343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:33.558 EAL: TSC is not safe to use in SMP mode 00:04:33.558 EAL: TSC is not invariant 00:04:33.558 [2024-07-15 17:23:29.137021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.558 [2024-07-15 17:23:29.238263] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:33.558 17:23:29 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:33.558 17:23:29 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:33.558 17:23:29 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:33.558 17:23:29 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:33.558 17:23:29 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:33.558 17:23:29 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:33.558 17:23:29 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:33.558 17:23:29 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:33.558 [2024-07-15 17:23:29.249638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.558 [2024-07-15 17:23:29.252536] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:33.558 [2024-07-15 17:23:29.288631] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:33.843 A filename is required. 
00:04:33.843 17:23:29 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:33.843 17:23:29 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:33.843 17:23:29 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:33.843 17:23:29 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:33.843 17:23:29 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:33.843 17:23:29 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:33.843 00:04:33.843 real 0m0.808s 00:04:33.843 user 0m0.254s 00:04:33.843 sys 0m0.552s 00:04:33.843 17:23:29 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.843 17:23:29 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:33.843 ************************************ 00:04:33.843 END TEST accel_missing_filename 00:04:33.843 ************************************ 00:04:33.843 17:23:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:33.843 17:23:29 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:33.843 17:23:29 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:33.843 17:23:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.843 17:23:29 accel -- common/autotest_common.sh@10 -- # set +x 00:04:33.843 ************************************ 00:04:33.843 START TEST accel_compress_verify 00:04:33.843 ************************************ 00:04:33.843 17:23:29 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:33.843 17:23:29 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:33.843 17:23:29 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:33.843 17:23:29 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:33.843 17:23:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.843 17:23:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:33.843 17:23:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:33.843 17:23:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:33.843 17:23:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.wpRsZi -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:33.843 [2024-07-15 17:23:29.469412] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:04:33.843 [2024-07-15 17:23:29.469553] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:34.409 EAL: TSC is not safe to use in SMP mode 00:04:34.409 EAL: TSC is not invariant 00:04:34.409 [2024-07-15 17:23:30.016800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.409 [2024-07-15 17:23:30.131812] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:34.409 17:23:30 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:34.409 17:23:30 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:34.409 17:23:30 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:34.409 17:23:30 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:34.409 17:23:30 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:34.409 17:23:30 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:34.409 17:23:30 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:34.409 17:23:30 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:34.409 [2024-07-15 17:23:30.144144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.409 [2024-07-15 17:23:30.147673] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:34.409 [2024-07-15 17:23:30.187637] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:04:34.667 00:04:34.667 Compression does not support the verify option, aborting. 00:04:34.667 17:23:30 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:04:34.667 17:23:30 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.667 17:23:30 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:04:34.667 17:23:30 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:34.667 17:23:30 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:34.667 17:23:30 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:34.667 00:04:34.667 real 0m0.874s 00:04:34.667 user 0m0.288s 00:04:34.667 sys 0m0.587s 00:04:34.667 17:23:30 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.667 ************************************ 00:04:34.667 17:23:30 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:34.667 END TEST accel_compress_verify 00:04:34.667 ************************************ 00:04:34.667 17:23:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:34.667 17:23:30 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:34.667 17:23:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:34.667 17:23:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.667 17:23:30 accel -- common/autotest_common.sh@10 -- # set +x 00:04:34.667 ************************************ 00:04:34.667 START TEST accel_wrong_workload 00:04:34.667 ************************************ 00:04:34.667 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:34.667 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:34.667 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w foobar 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:34.668 17:23:30 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.IZenpn -t 1 -w foobar 00:04:34.668 Unsupported workload type: foobar 00:04:34.668 [2024-07-15 17:23:30.391163] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:34.668 accel_perf options: 00:04:34.668 [-h help message] 00:04:34.668 [-q queue depth per core] 00:04:34.668 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:34.668 [-T number of threads per core 00:04:34.668 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:34.668 [-t time in seconds] 00:04:34.668 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:34.668 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:34.668 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:34.668 [-l for compress/decompress workloads, name of uncompressed input file 00:04:34.668 [-S for crc32c workload, use this seed value (default 0) 00:04:34.668 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:34.668 [-f for fill workload, use this BYTE value (default 255) 00:04:34.668 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:34.668 [-y verify result if this switch is on] 00:04:34.668 [-a tasks to allocate per core (default: same value as -q)] 00:04:34.668 Can be used to spread operations across a wider range of memory. 
00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:34.668 00:04:34.668 real 0m0.010s 00:04:34.668 user 0m0.007s 00:04:34.668 sys 0m0.006s 00:04:34.668 ************************************ 00:04:34.668 END TEST accel_wrong_workload 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.668 17:23:30 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:34.668 ************************************ 00:04:34.668 17:23:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:34.668 17:23:30 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:34.668 17:23:30 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:34.668 17:23:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.668 17:23:30 accel -- common/autotest_common.sh@10 -- # set +x 00:04:34.668 ************************************ 00:04:34.668 START TEST accel_negative_buffers 00:04:34.668 ************************************ 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:34.668 17:23:30 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Iwbpfd -t 1 -w xor -y -x -1 00:04:34.668 -x option must be non-negative. 00:04:34.668 [2024-07-15 17:23:30.441493] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:34.668 accel_perf options: 00:04:34.668 [-h help message] 00:04:34.668 [-q queue depth per core] 00:04:34.668 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:34.668 [-T number of threads per core 00:04:34.668 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:34.668 [-t time in seconds] 00:04:34.668 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:34.668 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:34.668 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:34.668 [-l for compress/decompress workloads, name of uncompressed input file 00:04:34.668 [-S for crc32c workload, use this seed value (default 0) 00:04:34.668 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:34.668 [-f for fill workload, use this BYTE value (default 255) 00:04:34.668 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:34.668 [-y verify result if this switch is on] 00:04:34.668 [-a tasks to allocate per core (default: same value as -q)] 00:04:34.668 Can be used to spread operations across a wider range of memory. 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:34.668 00:04:34.668 real 0m0.010s 00:04:34.668 user 0m0.010s 00:04:34.668 sys 0m0.001s 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.668 17:23:30 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:34.668 ************************************ 00:04:34.668 END TEST accel_negative_buffers 00:04:34.668 ************************************ 00:04:34.668 17:23:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:34.668 17:23:30 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:34.668 17:23:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:34.668 17:23:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.668 17:23:30 accel -- common/autotest_common.sh@10 -- # set +x 00:04:34.668 ************************************ 00:04:34.668 START TEST accel_crc32c 00:04:34.668 ************************************ 00:04:34.668 17:23:30 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:34.668 17:23:30 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:34.668 17:23:30 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:34.668 17:23:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:34.668 17:23:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:34.668 17:23:30 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:34.668 17:23:30 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.W15Ye1 -t 1 -w crc32c -S 32 -y 00:04:34.668 [2024-07-15 17:23:30.493918] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:04:34.668 [2024-07-15 17:23:30.494171] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:35.233 EAL: TSC is not safe to use in SMP mode 00:04:35.233 EAL: TSC is not invariant 00:04:35.233 [2024-07-15 17:23:31.041880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.491 [2024-07-15 17:23:31.131972] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:35.491 [2024-07-15 17:23:31.139854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:35.491 17:23:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.865 
17:23:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:36.865 17:23:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:36.865 00:04:36.865 real 0m1.814s 00:04:36.865 user 0m1.243s 00:04:36.865 sys 0m0.582s 00:04:36.865 17:23:32 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.865 17:23:32 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:36.865 ************************************ 00:04:36.865 END TEST accel_crc32c 00:04:36.865 ************************************ 00:04:36.865 17:23:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:36.865 17:23:32 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:36.865 17:23:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:36.865 17:23:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:36.865 17:23:32 accel -- common/autotest_common.sh@10 -- # set +x 00:04:36.865 ************************************ 00:04:36.865 START TEST accel_crc32c_C2 00:04:36.865 ************************************ 00:04:36.865 17:23:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:36.865 17:23:32 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:36.865 17:23:32 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:36.865 17:23:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:36.865 17:23:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:36.865 17:23:32 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:36.865 17:23:32 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.nWruRG -t 1 -w crc32c -y -C 2 00:04:36.865 [2024-07-15 17:23:32.354109] 
Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:36.865 [2024-07-15 17:23:32.354361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:37.123 EAL: TSC is not safe to use in SMP mode 00:04:37.123 EAL: TSC is not invariant 00:04:37.123 [2024-07-15 17:23:32.905639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.381 [2024-07-15 17:23:32.999074] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:04:37.381 [2024-07-15 17:23:33.010028] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:37.381 17:23:33 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:38.765 00:04:38.765 real 0m1.826s 00:04:38.765 user 0m1.243s 00:04:38.765 sys 0m0.592s 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.765 17:23:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:38.765 ************************************ 00:04:38.765 END TEST accel_crc32c_C2 00:04:38.765 ************************************ 00:04:38.765 17:23:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:38.765 17:23:34 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:04:38.765 17:23:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:38.765 17:23:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.765 17:23:34 accel -- common/autotest_common.sh@10 -- # set +x 00:04:38.765 ************************************ 00:04:38.765 START TEST accel_copy 00:04:38.765 ************************************ 00:04:38.765 17:23:34 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:04:38.765 17:23:34 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:38.766 17:23:34 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:04:38.766 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:38.766 17:23:34 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:38.766 17:23:34 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:04:38.766 17:23:34 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.CmWvyg -t 1 -w copy -y 00:04:38.766 [2024-07-15 17:23:34.214998] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:38.766 [2024-07-15 17:23:34.215192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:39.024 EAL: TSC is not safe to use in SMP mode 00:04:39.024 EAL: TSC is not invariant 00:04:39.024 [2024-07-15 17:23:34.743487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.024 [2024-07-15 17:23:34.837778] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:04:39.024 [2024-07-15 17:23:34.847230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.024 17:23:34 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.024 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.025 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:39.282 17:23:34 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@20 -- # 
val= 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:04:40.212 17:23:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:40.212 00:04:40.212 real 0m1.801s 00:04:40.212 user 0m1.232s 00:04:40.212 sys 0m0.576s 00:04:40.212 17:23:36 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.212 ************************************ 00:04:40.212 END TEST accel_copy 00:04:40.212 ************************************ 00:04:40.212 17:23:36 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:04:40.470 17:23:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:40.470 17:23:36 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:40.470 17:23:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:04:40.470 17:23:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.470 17:23:36 accel -- common/autotest_common.sh@10 -- # set +x 00:04:40.470 ************************************ 00:04:40.470 START TEST accel_fill 00:04:40.470 ************************************ 00:04:40.470 17:23:36 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:40.470 17:23:36 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:04:40.470 17:23:36 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:04:40.470 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:40.470 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:40.470 17:23:36 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:40.470 17:23:36 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ICs0IW -t 1 -w fill -f 128 -q 64 -a 64 -y 00:04:40.470 [2024-07-15 17:23:36.061779] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:04:40.470 [2024-07-15 17:23:36.061964] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:41.033 EAL: TSC is not safe to use in SMP mode 00:04:41.033 EAL: TSC is not invariant 00:04:41.033 [2024-07-15 17:23:36.602816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.033 [2024-07-15 17:23:36.693863] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:04:41.033 [2024-07-15 17:23:36.702604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.033 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:41.034 17:23:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:42.403 17:23:37 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:04:42.403 17:23:37 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:42.403 00:04:42.403 real 0m1.812s 00:04:42.403 user 0m1.242s 00:04:42.404 sys 0m0.584s 00:04:42.404 17:23:37 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.404 17:23:37 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:04:42.404 ************************************ 00:04:42.404 END TEST accel_fill 00:04:42.404 ************************************ 00:04:42.404 17:23:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:42.404 17:23:37 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:04:42.404 17:23:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:42.404 17:23:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.404 17:23:37 accel -- common/autotest_common.sh@10 -- # set +x 00:04:42.404 ************************************ 00:04:42.404 START TEST accel_copy_crc32c 00:04:42.404 ************************************ 00:04:42.404 17:23:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:04:42.404 17:23:37 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:42.404 17:23:37 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:42.404 17:23:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.404 17:23:37 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.404 17:23:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:04:42.404 17:23:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.yYyNKV -t 1 -w copy_crc32c -y 00:04:42.404 [2024-07-15 17:23:37.917600] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:04:42.404 [2024-07-15 17:23:37.917890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:42.663 EAL: TSC is not safe to use in SMP mode 00:04:42.663 EAL: TSC is not invariant 00:04:42.663 [2024-07-15 17:23:38.468806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.919 [2024-07-15 17:23:38.563202] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:42.919 [2024-07-15 17:23:38.572389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:42.919 17:23:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:44.290 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:44.291 00:04:44.291 real 0m1.817s 00:04:44.291 user 0m1.228s 00:04:44.291 sys 0m0.601s 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.291 17:23:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:44.291 ************************************ 00:04:44.291 END TEST accel_copy_crc32c 00:04:44.291 ************************************ 00:04:44.291 17:23:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:44.291 17:23:39 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:04:44.291 17:23:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:44.291 17:23:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.291 17:23:39 accel -- common/autotest_common.sh@10 -- # set +x 
00:04:44.291 ************************************ 00:04:44.291 START TEST accel_copy_crc32c_C2 00:04:44.291 ************************************ 00:04:44.291 17:23:39 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:04:44.291 17:23:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:44.291 17:23:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:44.291 17:23:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.291 17:23:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.291 17:23:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:04:44.291 17:23:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.zwp2Jq -t 1 -w copy_crc32c -y -C 2 00:04:44.291 [2024-07-15 17:23:39.775516] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:44.291 [2024-07-15 17:23:39.775784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:44.548 EAL: TSC is not safe to use in SMP mode 00:04:44.548 EAL: TSC is not invariant 00:04:44.548 [2024-07-15 17:23:40.311682] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.806 [2024-07-15 17:23:40.412017] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
00:04:44.806 [2024-07-15 17:23:40.420928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.806 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:44.807 17:23:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:46.181 00:04:46.181 real 0m1.810s 00:04:46.181 user 0m1.227s 00:04:46.181 sys 0m0.593s 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.181 17:23:41 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:04:46.181 ************************************ 00:04:46.181 END TEST accel_copy_crc32c_C2 00:04:46.181 ************************************ 00:04:46.181 17:23:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:46.181 17:23:41 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:04:46.181 17:23:41 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:46.181 17:23:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.181 17:23:41 accel -- common/autotest_common.sh@10 -- # set +x 00:04:46.181 ************************************ 00:04:46.181 START TEST accel_dualcast 00:04:46.181 ************************************ 00:04:46.181 17:23:41 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:04:46.181 17:23:41 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:04:46.181 17:23:41 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:04:46.181 17:23:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.181 17:23:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.181 17:23:41 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:04:46.181 17:23:41 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.jBl3zM -t 1 -w dualcast -y 00:04:46.181 [2024-07-15 17:23:41.620659] Starting SPDK 
v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:46.181 [2024-07-15 17:23:41.620856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:46.438 EAL: TSC is not safe to use in SMP mode 00:04:46.438 EAL: TSC is not invariant 00:04:46.438 [2024-07-15 17:23:42.161249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.438 [2024-07-15 17:23:42.263718] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:46.438 17:23:42 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:04:46.438 17:23:42 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:46.438 17:23:42 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:46.438 17:23:42 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:46.438 17:23:42 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:46.438 17:23:42 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:46.438 17:23:42 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:04:46.438 17:23:42 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:04:46.695 [2024-07-15 17:23:42.272598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:46.695 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 
17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:46.696 17:23:42 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:47.628 17:23:43 
accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:04:47.628 17:23:43 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:47.628 00:04:47.628 real 0m1.818s 00:04:47.628 user 0m1.256s 00:04:47.628 sys 0m0.571s 00:04:47.628 17:23:43 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.628 17:23:43 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:04:47.628 ************************************ 00:04:47.628 END TEST accel_dualcast 00:04:47.628 ************************************ 00:04:47.886 17:23:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:47.886 17:23:43 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:04:47.886 17:23:43 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:47.886 17:23:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.886 17:23:43 accel -- common/autotest_common.sh@10 -- # set +x 00:04:47.886 ************************************ 00:04:47.886 START TEST accel_compare 00:04:47.886 ************************************ 00:04:47.886 17:23:43 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:04:47.886 17:23:43 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:04:47.886 17:23:43 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:04:47.886 17:23:43 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:47.886 17:23:43 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:04:47.886 17:23:43 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:47.886 17:23:43 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.sHnvn0 -t 1 -w compare -y 00:04:47.886 [2024-07-15 17:23:43.478222] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 
initialization... 00:04:47.886 [2024-07-15 17:23:43.478431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:48.452 EAL: TSC is not safe to use in SMP mode 00:04:48.452 EAL: TSC is not invariant 00:04:48.452 [2024-07-15 17:23:44.009779] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.452 [2024-07-15 17:23:44.100692] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:04:48.452 [2024-07-15 17:23:44.110430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.452 
17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:48.452 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:48.453 17:23:44 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # 
IFS=: 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:04:49.826 17:23:45 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:49.826 00:04:49.826 real 0m1.791s 00:04:49.826 user 0m1.244s 00:04:49.826 sys 0m0.559s 00:04:49.826 17:23:45 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.826 17:23:45 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:04:49.826 ************************************ 00:04:49.826 END TEST accel_compare 00:04:49.826 ************************************ 00:04:49.826 17:23:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:49.826 17:23:45 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:04:49.826 17:23:45 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:49.826 17:23:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.826 17:23:45 accel -- common/autotest_common.sh@10 -- # set +x 00:04:49.826 ************************************ 00:04:49.826 START TEST accel_xor 00:04:49.826 ************************************ 00:04:49.826 17:23:45 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:04:49.826 17:23:45 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:04:49.826 17:23:45 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:04:49.826 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:49.826 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:49.826 17:23:45 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:04:49.826 17:23:45 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.iTSSf8 -t 1 -w xor -y 00:04:49.826 [2024-07-15 17:23:45.308686] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
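The two test sections above exercise accel_perf's "dualcast" and "compare" opcodes over 4096-byte buffers, and both run on the software module (the "accel_module=software" values in the trace). The snippet below is only a rough, hypothetical sketch of what those two opcodes amount to in a pure-software path; the function names are invented for the sketch and SPDK's real accel framework (task descriptors, completion callbacks) is considerably more involved.

#include <string.h>

/*
 * Hypothetical sketch only: what the "dualcast" and "compare" opcodes
 * benchmarked above boil down to when executed in software. These are not
 * SPDK functions.
 */

/* dualcast: copy one source buffer into two destination buffers. */
static void
sw_dualcast(void *dst1, void *dst2, const void *src, size_t len)
{
	memcpy(dst1, src, len);
	memcpy(dst2, src, len);
}

/* compare: check that two buffers hold identical bytes (0 on match). */
static int
sw_compare(const void *a, const void *b, size_t len)
{
	return memcmp(a, b, len) == 0 ? 0 : -1;
}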
00:04:49.826 [2024-07-15 17:23:45.308874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:50.084 EAL: TSC is not safe to use in SMP mode 00:04:50.084 EAL: TSC is not invariant 00:04:50.084 [2024-07-15 17:23:45.841916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.342 [2024-07-15 17:23:45.956907] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:04:50.342 [2024-07-15 17:23:45.966325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:50.342 17:23:45 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.342 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:50.343 17:23:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.714 17:23:47 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:51.714 00:04:51.714 real 0m1.822s 00:04:51.714 user 0m1.247s 00:04:51.714 sys 0m0.583s 00:04:51.714 17:23:47 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.714 17:23:47 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:04:51.714 ************************************ 00:04:51.714 END TEST accel_xor 00:04:51.714 ************************************ 00:04:51.714 17:23:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:51.714 17:23:47 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:04:51.714 17:23:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:51.714 17:23:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.714 17:23:47 accel -- common/autotest_common.sh@10 -- # set +x 00:04:51.714 ************************************ 00:04:51.714 START TEST accel_xor 00:04:51.714 ************************************ 00:04:51.714 17:23:47 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:04:51.714 17:23:47 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.aPbuVm -t 1 -w xor -y -x 3 00:04:51.714 [2024-07-15 17:23:47.160233] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
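The xor run that just finished used two source buffers (the lone "val=2" traced right after "accel_opc=xor"), while the run launched here adds "-x 3" and correspondingly traces "val=3", which appears to be the number of xor sources. As a rough illustration of the arithmetic being benchmarked, and not the framework's actual code path, a software xor over N equally sized sources looks like the sketch below (function name and signature are invented).

#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical sketch: XOR 'nsrcs' source buffers of 'len' bytes into 'dst'.
 * A real implementation would use word-sized or vectorized loads; this only
 * shows the operation the benchmark measures.
 */
static void
sw_xor(uint8_t *dst, uint8_t * const *srcs, size_t nsrcs, size_t len)
{
	for (size_t i = 0; i < len; i++) {
		uint8_t b = srcs[0][i];

		for (size_t n = 1; n < nsrcs; n++) {
			b ^= srcs[n][i];
		}
		dst[i] = b;
	}
}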
00:04:51.714 [2024-07-15 17:23:47.160478] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:51.973 EAL: TSC is not safe to use in SMP mode 00:04:51.973 EAL: TSC is not invariant 00:04:51.973 [2024-07-15 17:23:47.669245] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.973 [2024-07-15 17:23:47.757227] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:04:51.973 [2024-07-15 17:23:47.765367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:51.973 17:23:47 accel.accel_xor -- 
accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@22 -- # accel_module=software 00:04:51.973 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:51.974 17:23:47 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:53.406 17:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:53.407 17:23:48 accel.accel_xor -- 
accel/accel.sh@19 -- # read -r var val 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:04:53.407 17:23:48 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:53.407 00:04:53.407 real 0m1.767s 00:04:53.407 user 0m1.225s 00:04:53.407 sys 0m0.551s 00:04:53.407 ************************************ 00:04:53.407 17:23:48 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.407 17:23:48 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:04:53.407 END TEST accel_xor 00:04:53.407 ************************************ 00:04:53.407 17:23:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:53.407 17:23:48 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:04:53.407 17:23:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:53.407 17:23:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.407 17:23:48 accel -- common/autotest_common.sh@10 -- # set +x 00:04:53.407 ************************************ 00:04:53.407 START TEST accel_dif_verify 00:04:53.407 ************************************ 00:04:53.407 17:23:48 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:04:53.407 17:23:48 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:04:53.407 17:23:48 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:04:53.407 17:23:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.407 17:23:48 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.407 17:23:48 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:04:53.407 17:23:48 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.DVw4YN -t 1 -w dif_verify 00:04:53.407 [2024-07-15 17:23:48.960591] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
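The dif_verify run starting here feeds accel_perf a 4096-byte buffer carved into 512-byte blocks with 8 bytes of protection metadata per block (the '4096 bytes', '512 bytes' and '8 bytes' values traced below). The 8-byte tuple follows the usual T10 DIF shape: a 16-bit guard, a 16-bit application tag and a 32-bit reference tag. The sketch below is hypothetical; in particular crc16_guard() is a placeholder rather than the real T10-DIF CRC16, and the struct and function names are invented.

#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical sketch of a per-block DIF verify pass. Each 512-byte block is
 * paired with an 8-byte protection tuple; verify recomputes the guard and the
 * expected reference tag and flags any mismatch.
 */
struct dif_tuple {
	uint16_t guard;     /* checksum over the block's data */
	uint16_t app_tag;   /* application-defined, not checked in this sketch */
	uint32_t ref_tag;   /* typically tracks the block number */
};

static uint16_t
crc16_guard(const uint8_t *block, size_t len)
{
	uint16_t acc = 0;

	for (size_t i = 0; i < len; i++) {
		acc = (uint16_t)((acc << 1) ^ block[i]);  /* placeholder, not the T10 polynomial */
	}
	return acc;
}

static int
dif_verify_blocks(const uint8_t *data, const struct dif_tuple *md,
		  size_t block_size, size_t nblocks, uint32_t first_ref_tag)
{
	for (size_t b = 0; b < nblocks; b++) {
		const uint8_t *block = data + b * block_size;

		if (md[b].guard != crc16_guard(block, block_size) ||
		    md[b].ref_tag != first_ref_tag + (uint32_t)b) {
			return -1;  /* protection information mismatch */
		}
	}
	return 0;
}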
00:04:53.407 [2024-07-15 17:23:48.960820] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:53.971 EAL: TSC is not safe to use in SMP mode 00:04:53.971 EAL: TSC is not invariant 00:04:53.971 [2024-07-15 17:23:49.510524] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:53.971 [2024-07-15 17:23:49.613982] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:04:53.971 [2024-07-15 17:23:49.624500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 
00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 
-- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:53.971 17:23:49 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:04:55.344 17:23:50 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:55.344 00:04:55.344 real 0m1.828s 00:04:55.344 user 0m1.240s 00:04:55.344 sys 0m0.585s 00:04:55.344 ************************************ 00:04:55.344 END TEST accel_dif_verify 00:04:55.344 17:23:50 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.344 17:23:50 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:04:55.344 ************************************ 00:04:55.344 17:23:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:55.344 17:23:50 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:04:55.344 17:23:50 accel 
-- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:55.344 17:23:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.344 17:23:50 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.344 ************************************ 00:04:55.344 START TEST accel_dif_generate 00:04:55.344 ************************************ 00:04:55.344 17:23:50 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:04:55.344 17:23:50 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:04:55.344 17:23:50 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:04:55.344 17:23:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.344 17:23:50 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.344 17:23:50 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:04:55.344 17:23:50 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.U3r22e -t 1 -w dif_generate 00:04:55.344 [2024-07-15 17:23:50.824991] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:55.344 [2024-07-15 17:23:50.825167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:55.601 EAL: TSC is not safe to use in SMP mode 00:04:55.601 EAL: TSC is not invariant 00:04:55.601 [2024-07-15 17:23:51.334056] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.601 [2024-07-15 17:23:51.421002] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:55.601 17:23:51 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:04:55.601 17:23:51 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.601 17:23:51 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.601 17:23:51 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.601 17:23:51 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.601 17:23:51 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.601 17:23:51 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:04:55.601 17:23:51 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 
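The dif_generate run being configured here uses the same geometry as the verify test above (4096-byte buffer, 512-byte blocks, 8 bytes of metadata per block); the difference is that generate fills the per-block tuples instead of checking them. Another hypothetical, self-contained sketch follows, with guard16() again standing in for the real T10-DIF CRC and all names invented.

#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical counterpart to the verify sketch earlier: populate an 8-byte
 * protection tuple for every 512-byte block instead of checking it.
 */
struct dif8 {
	uint16_t guard;
	uint16_t app_tag;
	uint32_t ref_tag;
};

static uint16_t
guard16(const uint8_t *block, size_t len)
{
	uint16_t acc = 0;

	for (size_t i = 0; i < len; i++) {
		acc = (uint16_t)((acc << 1) ^ block[i]);  /* placeholder checksum */
	}
	return acc;
}

static void
dif_generate_blocks(const uint8_t *data, struct dif8 *md,
		    size_t block_size, size_t nblocks, uint32_t first_ref_tag)
{
	for (size_t b = 0; b < nblocks; b++) {
		md[b].guard = guard16(data + b * block_size, block_size);
		md[b].app_tag = 0;                        /* unused in this sketch */
		md[b].ref_tag = first_ref_tag + (uint32_t)b;
	}
}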
00:04:55.601 [2024-07-15 17:23:51.429867] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 
accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='1 seconds' 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:55.858 17:23:51 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:56.790 17:23:52 
accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:04:56.790 17:23:52 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:56.790 00:04:56.790 real 0m1.773s 00:04:56.790 user 0m1.226s 00:04:56.790 sys 0m0.560s 00:04:56.790 17:23:52 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.790 17:23:52 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:04:56.790 ************************************ 00:04:56.790 END TEST accel_dif_generate 00:04:56.790 ************************************ 00:04:56.790 17:23:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:56.790 17:23:52 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:04:56.790 17:23:52 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:56.790 17:23:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.790 17:23:52 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.047 ************************************ 00:04:57.047 START TEST accel_dif_generate_copy 00:04:57.047 ************************************ 00:04:57.047 17:23:52 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:04:57.047 17:23:52 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:04:57.047 17:23:52 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:04:57.047 17:23:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.047 17:23:52 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.047 17:23:52 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w 
dif_generate_copy 00:04:57.047 17:23:52 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Cxs40p -t 1 -w dif_generate_copy 00:04:57.047 [2024-07-15 17:23:52.633394] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:57.047 [2024-07-15 17:23:52.633556] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:57.623 EAL: TSC is not safe to use in SMP mode 00:04:57.623 EAL: TSC is not invariant 00:04:57.623 [2024-07-15 17:23:53.170719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.623 [2024-07-15 17:23:53.262387] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:04:57.623 [2024-07-15 17:23:53.272804] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # 
val=dif_generate_copy 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" 
in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:57.623 17:23:53 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:58.999 00:04:58.999 real 0m1.802s 00:04:58.999 user 0m1.231s 00:04:58.999 sys 0m0.582s 00:04:58.999 17:23:54 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.999 17:23:54 
accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:04:58.999 ************************************ 00:04:58.999 END TEST accel_dif_generate_copy 00:04:58.999 ************************************ 00:04:58.999 17:23:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:58.999 17:23:54 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:04:58.999 17:23:54 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:58.999 17:23:54 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:58.999 17:23:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.999 17:23:54 accel -- common/autotest_common.sh@10 -- # set +x 00:04:58.999 ************************************ 00:04:58.999 START TEST accel_comp 00:04:58.999 ************************************ 00:04:58.999 17:23:54 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:58.999 17:23:54 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:04:58.999 17:23:54 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:04:58.999 17:23:54 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:58.999 17:23:54 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:58.999 17:23:54 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:58.999 17:23:54 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.tTN5Yu -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:58.999 [2024-07-15 17:23:54.474651] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:04:58.999 [2024-07-15 17:23:54.474890] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:59.258 EAL: TSC is not safe to use in SMP mode 00:04:59.258 EAL: TSC is not invariant 00:04:59.258 [2024-07-15 17:23:54.983887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.516 [2024-07-15 17:23:55.096048] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
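Both DIF tests above follow the same harness pattern: run_test names the test, accel_test forwards its arguments, and accel_perf from build/examples does the actual work while the val= trace lines record the settings it reports back (4096-byte buffers, the software module, a 1-second run). A minimal sketch of re-running those two workloads by hand, assuming the repo layout shown in this log; the per-run JSON config normally passed with -c is generated by accel.sh and omitted here, and SPDK_DIR is only an illustrative helper variable:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk          # path as it appears in this run
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w dif_generate_copy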
00:04:59.516 [2024-07-15 17:23:55.106836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.516 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:04:59.517 17:23:55 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:00.483 
17:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:00.483 17:23:56 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:00.483 00:05:00.483 real 0m1.799s 00:05:00.483 user 0m1.270s 00:05:00.483 sys 0m0.535s 00:05:00.483 17:23:56 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.483 ************************************ 00:05:00.483 END TEST accel_comp 00:05:00.483 ************************************ 00:05:00.483 17:23:56 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:00.483 17:23:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:00.483 17:23:56 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:00.483 17:23:56 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:00.483 17:23:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.483 17:23:56 accel -- common/autotest_common.sh@10 -- # set +x 00:05:00.483 ************************************ 00:05:00.483 START TEST accel_decomp 00:05:00.483 ************************************ 00:05:00.483 17:23:56 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:00.483 17:23:56 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:00.483 17:23:56 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:00.483 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:00.483 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:00.483 17:23:56 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:00.483 17:23:56 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.f06xWi -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:00.483 [2024-07-15 17:23:56.305214] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:00.483 [2024-07-15 17:23:56.305408] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:01.050 EAL: TSC is not safe to use in SMP mode 00:05:01.050 EAL: TSC is not invariant 00:05:01.050 [2024-07-15 17:23:56.822730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.308 [2024-07-15 17:23:56.922699] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
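The compress test that just passed and the decompress test being configured here both point accel_perf at the canned input file test/accel/bib from the SPDK tree via -l; the decompress runs additionally pass -y, presumably so the output is verified rather than only timed. A hedged sketch of the equivalent manual invocations, using only paths and flags that appear in the trace (SPDK_DIR is an illustrative helper):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK_DIR/test/accel/bib"
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y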
00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:01.308 [2024-07-15 17:23:56.933404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.308 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:01.309 17:23:56 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:02.684 17:23:58 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:02.684 17:23:58 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:02.684 00:05:02.684 real 0m1.796s 00:05:02.684 user 0m1.237s 00:05:02.684 sys 0m0.565s 00:05:02.684 17:23:58 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:02.684 17:23:58 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:02.684 ************************************ 00:05:02.684 END TEST accel_decomp 00:05:02.684 ************************************ 00:05:02.684 17:23:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:02.684 17:23:58 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:02.684 17:23:58 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:02.684 17:23:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.684 17:23:58 accel -- common/autotest_common.sh@10 -- # set +x 00:05:02.684 ************************************ 00:05:02.684 START TEST accel_decomp_full 00:05:02.684 ************************************ 00:05:02.684 17:23:58 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:02.684 17:23:58 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:02.684 17:23:58 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:02.684 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:02.684 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:02.684 17:23:58 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:02.684 17:23:58 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.Dkorkb -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:02.684 [2024-07-15 17:23:58.136651] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:02.684 [2024-07-15 17:23:58.136903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:02.942 EAL: TSC is not safe to use in SMP mode 00:05:02.942 EAL: TSC is not invariant 00:05:02.942 [2024-07-15 17:23:58.678968] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.200 [2024-07-15 17:23:58.783470] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:03.200 [2024-07-15 17:23:58.793464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # 
val=decompress 00:05:03.200 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full 
-- accel/accel.sh@20 -- # val= 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:03.201 17:23:58 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:04.135 17:23:59 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.135 00:05:04.135 real 0m1.831s 00:05:04.135 user 0m1.252s 00:05:04.135 sys 0m0.591s 00:05:04.135 17:23:59 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.135 17:23:59 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:04.135 ************************************ 00:05:04.135 END TEST accel_decomp_full 00:05:04.135 ************************************ 00:05:04.393 17:23:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:04.393 17:23:59 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
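The accel_decomp_full run summarized above differs from plain accel_decomp only by the extra -o 0 on its command line; in this trace that changes the reported transfer size from '4096 bytes' to the full '111250 bytes' of the bib file. A sketch of that invocation with the paths used in this run (SPDK_DIR is an illustrative helper; the generated -c config is omitted):

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -o 0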
00:05:04.393 17:23:59 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:04.393 17:23:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.393 17:23:59 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.393 ************************************ 00:05:04.393 START TEST accel_decomp_mcore 00:05:04.393 ************************************ 00:05:04.393 17:23:59 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:04.393 17:23:59 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:04.393 17:23:59 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:04.393 17:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.393 17:23:59 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.393 17:23:59 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:04.393 17:23:59 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ROmrGQ -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:04.393 [2024-07-15 17:24:00.005906] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:04.393 [2024-07-15 17:24:00.006130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:04.958 EAL: TSC is not safe to use in SMP mode 00:05:04.958 EAL: TSC is not invariant 00:05:04.958 [2024-07-15 17:24:00.522214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:04.958 [2024-07-15 17:24:00.609779] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:04.958 [2024-07-15 17:24:00.609846] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:04.958 [2024-07-15 17:24:00.609856] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:04.958 [2024-07-15 17:24:00.609864] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:04.958 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:04.958 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:04.958 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:04.958 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:04.958 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:04.958 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 
00:05:04.959 [2024-07-15 17:24:00.621696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.959 [2024-07-15 17:24:00.621569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.959 [2024-07-15 17:24:00.621684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:04.959 [2024-07-15 17:24:00.621616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
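accel_decomp_mcore runs the same decompress workload under the core mask -m 0xf, which is why four "Reactor started on core N" notices (cores 0 through 3) appear above instead of one, and why the summary below reports more user time (0m4.376s) than wall-clock time. A sketch of the invocation as traced, with SPDK_DIR as an illustrative helper:

  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -m 0xf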
00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:04.959 17:24:00 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.329 00:05:06.329 real 0m1.790s 00:05:06.329 user 0m4.376s 00:05:06.329 sys 0m0.545s 00:05:06.329 ************************************ 00:05:06.329 END TEST accel_decomp_mcore 00:05:06.329 ************************************ 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.329 17:24:01 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:06.329 17:24:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.329 17:24:01 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:06.329 17:24:01 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:06.329 17:24:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.329 17:24:01 accel -- common/autotest_common.sh@10 -- # set +x 00:05:06.329 ************************************ 00:05:06.329 START TEST accel_decomp_full_mcore 00:05:06.329 ************************************ 00:05:06.329 17:24:01 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:06.329 17:24:01 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:06.329 17:24:01 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:06.329 17:24:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.329 17:24:01 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.329 17:24:01 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:06.329 17:24:01 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.WXgtwk -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:06.329 [2024-07-15 17:24:01.829556] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:06.329 [2024-07-15 17:24:01.829814] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:06.587 EAL: TSC is not safe to use in SMP mode 00:05:06.587 EAL: TSC is not invariant 00:05:06.587 [2024-07-15 17:24:02.343795] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:06.845 [2024-07-15 17:24:02.441799] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:06.845 [2024-07-15 17:24:02.441878] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:06.845 [2024-07-15 17:24:02.441896] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:06.845 [2024-07-15 17:24:02.441912] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
00:05:06.845 [2024-07-15 17:24:02.452913] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.845 [2024-07-15 17:24:02.452798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.845 [2024-07-15 17:24:02.452859] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:06.845 [2024-07-15 17:24:02.452903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:05:06.845 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:06.846 17:24:02 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.834 
17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.834 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.834 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.834 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.834 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.834 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.834 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.834 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:07.835 00:05:07.835 real 0m1.804s 00:05:07.835 user 0m4.409s 
00:05:07.835 sys 0m0.554s 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.835 17:24:03 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 ************************************ 00:05:07.835 END TEST accel_decomp_full_mcore 00:05:07.835 ************************************ 00:05:07.835 17:24:03 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:07.835 17:24:03 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:07.835 17:24:03 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:07.835 17:24:03 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.835 17:24:03 accel -- common/autotest_common.sh@10 -- # set +x 00:05:07.835 ************************************ 00:05:07.835 START TEST accel_decomp_mthread 00:05:07.835 ************************************ 00:05:07.835 17:24:03 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:07.835 17:24:03 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:07.835 17:24:03 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:07.835 17:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:07.835 17:24:03 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:07.835 17:24:03 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:07.835 17:24:03 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.H6rJO9 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:07.835 [2024-07-15 17:24:03.664669] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:07.835 [2024-07-15 17:24:03.664867] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:08.401 EAL: TSC is not safe to use in SMP mode 00:05:08.401 EAL: TSC is not invariant 00:05:08.401 [2024-07-15 17:24:04.202792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.659 [2024-07-15 17:24:04.290560] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:05:08.659 [2024-07-15 17:24:04.299629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.659 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@22 
-- # accel_module=software 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:08.660 17:24:04 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.033 17:24:05 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.033 00:05:10.033 real 0m1.806s 00:05:10.033 user 0m1.241s 00:05:10.033 sys 0m0.577s 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.033 17:24:05 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:10.033 ************************************ 00:05:10.033 END TEST accel_decomp_mthread 00:05:10.033 ************************************ 00:05:10.033 17:24:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:10.033 17:24:05 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:10.033 17:24:05 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:10.033 17:24:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.033 17:24:05 accel -- common/autotest_common.sh@10 -- # set +x 00:05:10.033 ************************************ 00:05:10.033 START TEST accel_decomp_full_mthread 00:05:10.033 ************************************ 00:05:10.033 17:24:05 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:10.033 17:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:10.033 17:24:05 
accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:10.033 17:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.033 17:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.033 17:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:10.033 17:24:05 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ctIPjd -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:10.033 [2024-07-15 17:24:05.505597] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:10.033 [2024-07-15 17:24:05.505798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:10.290 EAL: TSC is not safe to use in SMP mode 00:05:10.290 EAL: TSC is not invariant 00:05:10.290 [2024-07-15 17:24:06.046695] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.547 [2024-07-15 17:24:06.139665] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
00:05:10.547 [2024-07-15 17:24:06.149809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:10.547 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.548 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.548 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:10.548 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:10.548 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:10.548 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:10.548 17:24:06 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:11.928 00:05:11.928 real 0m1.843s 00:05:11.928 user 0m1.270s 00:05:11.928 sys 0m0.581s 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:11.928 ************************************ 00:05:11.928 END TEST accel_decomp_full_mthread 00:05:11.928 17:24:07 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:11.928 ************************************ 00:05:11.928 17:24:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:11.928 17:24:07 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:11.928 17:24:07 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.pOUt2B 00:05:11.928 17:24:07 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:11.928 17:24:07 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:11.928 17:24:07 accel -- common/autotest_common.sh@10 -- # set +x 00:05:11.928 ************************************ 00:05:11.928 START TEST accel_dif_functional_tests 00:05:11.928 ************************************ 00:05:11.928 17:24:07 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.pOUt2B 00:05:11.928 [2024-07-15 17:24:07.381212] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:11.928 [2024-07-15 17:24:07.381406] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:12.185 EAL: TSC is not safe to use in SMP mode 00:05:12.185 EAL: TSC is not invariant 00:05:12.185 [2024-07-15 17:24:07.902573] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:12.185 [2024-07-15 17:24:08.003240] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:12.185 [2024-07-15 17:24:08.003300] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:12.185 [2024-07-15 17:24:08.003314] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:12.185 17:24:08 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:12.185 17:24:08 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.185 17:24:08 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.185 17:24:08 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.185 17:24:08 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.185 17:24:08 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.185 17:24:08 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:12.185 17:24:08 accel -- accel/accel.sh@41 -- # jq -r . 
00:05:12.185 [2024-07-15 17:24:08.013692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.185 [2024-07-15 17:24:08.013640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:12.185 [2024-07-15 17:24:08.013683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:12.443 00:05:12.443 00:05:12.443 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.443 http://cunit.sourceforge.net/ 00:05:12.443 00:05:12.443 00:05:12.443 Suite: accel_dif 00:05:12.443 Test: verify: DIF generated, GUARD check ...passed 00:05:12.443 Test: verify: DIF generated, APPTAG check ...passed 00:05:12.443 Test: verify: DIF generated, REFTAG check ...passed 00:05:12.443 Test: verify: DIF not generated, GUARD check ...[2024-07-15 17:24:08.031695] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:12.443 passed 00:05:12.443 Test: verify: DIF not generated, APPTAG check ...passed 00:05:12.443 Test: verify: DIF not generated, REFTAG check ...passed 00:05:12.443 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:12.444 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:12.444 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:12.444 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-15 17:24:08.031762] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:12.444 [2024-07-15 17:24:08.031788] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:12.444 [2024-07-15 17:24:08.031837] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:12.444 passed 00:05:12.444 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:12.444 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:05:12.444 Test: verify copy: DIF generated, GUARD check ...passed 00:05:12.444 Test: verify copy: DIF generated, APPTAG check ...passed 00:05:12.444 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:12.444 Test: verify copy: DIF not generated, GUARD check ...passed 00:05:12.444 Test: verify copy: DIF not generated, APPTAG check ...passed 00:05:12.444 Test: verify copy: DIF not generated, REFTAG check ...passed 00:05:12.444 Test: generate copy: DIF generated, GUARD check ...passed 00:05:12.444 Test: generate copy: DIF generated, APTTAG check ...[2024-07-15 17:24:08.031905] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:12.444 [2024-07-15 17:24:08.031996] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:12.444 [2024-07-15 17:24:08.032020] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:12.444 [2024-07-15 17:24:08.032044] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:12.444 passed 00:05:12.444 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:12.444 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:12.444 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:12.444 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:05:12.444 Test: generate copy: iovecs-len validate ...passed 00:05:12.444 Test: generate copy: buffer alignment validate ...[2024-07-15 17:24:08.032182] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are 
not valid or misaligned with block_size. 00:05:12.444 passed 00:05:12.444 00:05:12.444 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.444 suites 1 1 n/a 0 0 00:05:12.444 tests 26 26 26 0 0 00:05:12.444 asserts 115 115 115 0 n/a 00:05:12.444 00:05:12.444 Elapsed time = 0.000 seconds 00:05:12.444 00:05:12.444 real 0m0.856s 00:05:12.444 user 0m0.435s 00:05:12.444 sys 0m0.584s 00:05:12.444 17:24:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.444 17:24:08 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:12.444 ************************************ 00:05:12.444 END TEST accel_dif_functional_tests 00:05:12.444 ************************************ 00:05:12.444 17:24:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:12.444 17:24:08 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:12.444 17:24:08 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:12.444 00:05:12.444 real 0m41.187s 00:05:12.444 user 0m33.802s 00:05:12.444 17:24:08 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.444 sys 0m14.393s 00:05:12.444 17:24:08 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:12.444 17:24:08 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.444 17:24:08 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:12.444 17:24:08 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.444 17:24:08 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.444 17:24:08 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:12.444 17:24:08 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.444 17:24:08 accel -- common/autotest_common.sh@10 -- # set +x 00:05:12.444 17:24:08 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:12.444 17:24:08 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:12.444 ************************************ 00:05:12.444 END TEST accel 00:05:12.444 ************************************ 00:05:12.444 17:24:08 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:05:12.444 17:24:08 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.444 17:24:08 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:12.444 17:24:08 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:12.444 17:24:08 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 
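The four decompression cases above (accel_decomp_mcore, accel_decomp_full_mcore, accel_decomp_mthread, accel_decomp_full_mthread) all drive the same accel_perf binary; a minimal sketch of the multi-core invocation, with paths and flags taken from the trace above (-y and -o 0 are copied verbatim and not interpreted here):

    # -t 1           run the workload for ~1 second (matches the '1 seconds' val above)
    # -w decompress  workload under test
    # -l <bib>       pre-compressed input file shipped with the SPDK accel tests
    # -m 0xf         core mask, i.e. the four reactors started on cores 0-3 above
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf
    # the *_mthread variants swap '-m 0xf' for '-T 2' and run on a single core;
    # the harness also passes a generated '-c /tmp/sh-np.*' config file, omitted here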
00:05:12.701 17:24:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.701 17:24:08 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:12.701 17:24:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.701 17:24:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.701 17:24:08 -- common/autotest_common.sh@10 -- # set +x 00:05:12.701 ************************************ 00:05:12.701 START TEST accel_rpc 00:05:12.701 ************************************ 00:05:12.701 17:24:08 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:12.701 * Looking for test storage... 00:05:12.701 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:12.701 17:24:08 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:12.701 17:24:08 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47460 00:05:12.701 17:24:08 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47460 00:05:12.701 17:24:08 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:12.702 17:24:08 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 47460 ']' 00:05:12.702 17:24:08 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.702 17:24:08 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.702 17:24:08 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.702 17:24:08 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.702 17:24:08 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.702 [2024-07-15 17:24:08.441221] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:12.702 [2024-07-15 17:24:08.441425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:13.267 EAL: TSC is not safe to use in SMP mode 00:05:13.267 EAL: TSC is not invariant 00:05:13.267 [2024-07-15 17:24:08.988751] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.267 [2024-07-15 17:24:09.079856] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
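For reference, the spdk_tgt start-and-wait pattern traced above for the accel_rpc suite reduces to roughly the following sketch (waitforlisten and killprocess are helpers from autotest_common.sh; the socket path is the default /var/tmp/spdk.sock shown in the trace):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    trap 'killprocess $spdk_tgt_pid; exit 1' ERR   # matches the trap installed by accel_rpc.sh above
    waitforlisten "$spdk_tgt_pid"                  # blocks until the target answers on /var/tmp/spdk.sock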
00:05:13.267 [2024-07-15 17:24:09.082064] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.832 17:24:09 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.832 17:24:09 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:13.832 17:24:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:13.832 17:24:09 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:13.832 17:24:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:13.832 17:24:09 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:13.832 17:24:09 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:13.832 17:24:09 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.832 17:24:09 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.832 17:24:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.832 ************************************ 00:05:13.832 START TEST accel_assign_opcode 00:05:13.832 ************************************ 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:13.832 [2024-07-15 17:24:09.614388] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:13.832 [2024-07-15 17:24:09.622380] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:13.832 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:14.090 software 00:05:14.090 00:05:14.090 real 0m0.072s 00:05:14.090 user 0m0.007s 00:05:14.090 sys 0m0.012s 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.090 17:24:09 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:14.090 ************************************ 00:05:14.090 END TEST accel_assign_opcode 00:05:14.090 ************************************ 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:14.090 17:24:09 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47460 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 47460 ']' 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 47460 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 47460 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@956 -- # tail -1 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:14.090 killing process with pid 47460 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47460' 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@967 -- # kill 47460 00:05:14.090 17:24:09 accel_rpc -- common/autotest_common.sh@972 -- # wait 47460 00:05:14.347 00:05:14.347 real 0m1.697s 00:05:14.347 user 0m1.679s 00:05:14.347 sys 0m0.775s 00:05:14.347 17:24:09 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.347 17:24:09 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.347 ************************************ 00:05:14.347 END TEST accel_rpc 00:05:14.347 ************************************ 00:05:14.347 17:24:10 -- common/autotest_common.sh@1142 -- # return 0 00:05:14.347 17:24:10 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:14.347 17:24:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.347 17:24:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.347 17:24:10 -- common/autotest_common.sh@10 -- # set +x 00:05:14.347 ************************************ 00:05:14.347 START TEST app_cmdline 00:05:14.347 ************************************ 00:05:14.347 17:24:10 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:14.347 * Looking for test storage... 00:05:14.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:14.347 17:24:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:14.347 17:24:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47538 00:05:14.347 17:24:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47538 00:05:14.347 17:24:10 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 47538 ']' 00:05:14.347 17:24:10 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.347 17:24:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:14.347 17:24:10 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:14.347 17:24:10 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
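The accel_assign_opcode suite above is a three-step RPC exchange: assign the copy opcode to a module before the framework is initialized, start the framework, then read the assignments back. A minimal sketch against a target started with --wait-for-rpc, assuming the repo's rpc.py at the path shown in the trace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" accel_assign_opc -o copy -m software    # assignment must precede framework init
    "$RPC" framework_start_init                    # subsystems start; the assignment takes effect
    "$RPC" accel_get_opc_assignments | jq -r .copy # expected output: software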
00:05:14.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.347 17:24:10 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:14.347 17:24:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:14.347 [2024-07-15 17:24:10.157778] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:14.347 [2024-07-15 17:24:10.157916] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:14.980 EAL: TSC is not safe to use in SMP mode 00:05:14.980 EAL: TSC is not invariant 00:05:14.980 [2024-07-15 17:24:10.690642] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.980 [2024-07-15 17:24:10.792132] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:14.980 [2024-07-15 17:24:10.794754] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.544 17:24:11 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.544 17:24:11 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:15.544 17:24:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:15.802 { 00:05:15.802 "version": "SPDK v24.09-pre git sha1 455fda465", 00:05:15.802 "fields": { 00:05:15.802 "major": 24, 00:05:15.802 "minor": 9, 00:05:15.802 "patch": 0, 00:05:15.802 "suffix": "-pre", 00:05:15.802 "commit": "455fda465" 00:05:15.802 } 00:05:15.802 } 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:15.802 17:24:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.802 17:24:11 app_cmdline -- 
common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:15.802 17:24:11 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:16.060 request: 00:05:16.060 { 00:05:16.060 "method": "env_dpdk_get_mem_stats", 00:05:16.060 "req_id": 1 00:05:16.060 } 00:05:16.060 Got JSON-RPC error response 00:05:16.060 response: 00:05:16.060 { 00:05:16.060 "code": -32601, 00:05:16.060 "message": "Method not found" 00:05:16.060 } 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:16.060 17:24:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47538 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 47538 ']' 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 47538 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@956 -- # ps -c -o command 47538 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@956 -- # tail -1 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:16.060 killing process with pid 47538 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47538' 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@967 -- # kill 47538 00:05:16.060 17:24:11 app_cmdline -- common/autotest_common.sh@972 -- # wait 47538 00:05:16.317 00:05:16.317 real 0m1.943s 00:05:16.317 user 0m2.261s 00:05:16.317 sys 0m0.749s 00:05:16.317 17:24:11 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.317 17:24:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:16.317 ************************************ 00:05:16.317 END TEST app_cmdline 00:05:16.317 ************************************ 00:05:16.317 17:24:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.317 17:24:12 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:16.317 17:24:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.317 17:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.317 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.317 ************************************ 00:05:16.317 START TEST version 00:05:16.317 ************************************ 00:05:16.317 17:24:12 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:16.575 * Looking for test storage... 
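The app_cmdline stage above exercises the RPC allowlist: the target is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so those two methods answer normally while anything else is rejected with JSON-RPC error -32601 ("Method not found"), which is exactly what the env_dpdk_get_mem_stats probe shows. A minimal sketch, assuming the same binaries and default socket as in the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    # (wait for /var/tmp/spdk.sock as in the earlier sketch)
    "$SPDK/scripts/rpc.py" spdk_get_version                      # allowed: prints the version object
    "$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]' | sort  # lists only the two allowed methods
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats                # rejected: "Method not found" (-32601)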
00:05:16.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:16.575 17:24:12 version -- app/version.sh@17 -- # get_header_version major 00:05:16.575 17:24:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:16.575 17:24:12 version -- app/version.sh@14 -- # cut -f2 00:05:16.575 17:24:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.575 17:24:12 version -- app/version.sh@17 -- # major=24 00:05:16.575 17:24:12 version -- app/version.sh@18 -- # get_header_version minor 00:05:16.575 17:24:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:16.575 17:24:12 version -- app/version.sh@14 -- # cut -f2 00:05:16.575 17:24:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.575 17:24:12 version -- app/version.sh@18 -- # minor=9 00:05:16.575 17:24:12 version -- app/version.sh@19 -- # get_header_version patch 00:05:16.575 17:24:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:16.575 17:24:12 version -- app/version.sh@14 -- # cut -f2 00:05:16.575 17:24:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.575 17:24:12 version -- app/version.sh@19 -- # patch=0 00:05:16.575 17:24:12 version -- app/version.sh@20 -- # get_header_version suffix 00:05:16.575 17:24:12 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:16.575 17:24:12 version -- app/version.sh@14 -- # cut -f2 00:05:16.575 17:24:12 version -- app/version.sh@14 -- # tr -d '"' 00:05:16.575 17:24:12 version -- app/version.sh@20 -- # suffix=-pre 00:05:16.575 17:24:12 version -- app/version.sh@22 -- # version=24.9 00:05:16.575 17:24:12 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:16.575 17:24:12 version -- app/version.sh@28 -- # version=24.9rc0 00:05:16.575 17:24:12 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:16.575 17:24:12 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:16.575 17:24:12 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:16.575 17:24:12 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:16.575 00:05:16.575 real 0m0.216s 00:05:16.575 user 0m0.169s 00:05:16.575 sys 0m0.143s 00:05:16.575 17:24:12 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.575 17:24:12 version -- common/autotest_common.sh@10 -- # set +x 00:05:16.575 ************************************ 00:05:16.575 END TEST version 00:05:16.575 ************************************ 00:05:16.575 17:24:12 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.575 17:24:12 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:05:16.575 17:24:12 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:16.575 17:24:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.575 17:24:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.575 17:24:12 -- common/autotest_common.sh@10 -- # set +x 00:05:16.575 ************************************ 00:05:16.575 START TEST blockdev_general 00:05:16.575 
************************************ 00:05:16.575 17:24:12 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:16.575 * Looking for test storage... 00:05:16.575 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:16.575 17:24:12 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:05:16.575 17:24:12 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=47677 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:05:16.833 17:24:12 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 47677 00:05:16.833 17:24:12 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 47677 ']' 00:05:16.833 17:24:12 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.833 17:24:12 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.833 17:24:12 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
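The version suite just above works entirely offline: it scrapes SPDK_VERSION_MAJOR/MINOR/PATCH/SUFFIX out of include/spdk/version.h with grep/cut/tr, assembles the release string, and checks it against the bundled Python package. A condensed sketch of that pipeline, using the paths from the trace (the -pre suffix shows up as rc0 in this run, giving 24.9rc0):

    HDR=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
    major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
    minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
    patch=$(grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' "$HDR" | cut -f2 | tr -d '"')
    version="$major.$minor"
    [ "$patch" != 0 ] && version="$version.$patch"
    version="${version}rc0"   # -pre maps to rc0 here
    py=$(PYTHONPATH=/home/vagrant/spdk_repo/spdk/python python3 -c 'import spdk; print(spdk.__version__)')
    [ "$py" = "$version" ]    # both are 24.9rc0 in the trace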
00:05:16.833 17:24:12 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.833 17:24:12 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:16.833 [2024-07-15 17:24:12.415025] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:16.833 [2024-07-15 17:24:12.415306] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:17.397 EAL: TSC is not safe to use in SMP mode 00:05:17.397 EAL: TSC is not invariant 00:05:17.397 [2024-07-15 17:24:12.936146] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.397 [2024-07-15 17:24:13.033038] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:17.397 [2024-07-15 17:24:13.035334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.654 17:24:13 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.654 17:24:13 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:05:17.654 17:24:13 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:05:17.654 17:24:13 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:05:17.654 17:24:13 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:05:17.654 17:24:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.654 17:24:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:17.911 [2024-07-15 17:24:13.508338] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:17.911 [2024-07-15 17:24:13.508388] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:17.911 00:05:17.911 [2024-07-15 17:24:13.516332] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:17.911 [2024-07-15 17:24:13.516354] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:17.911 00:05:17.911 Malloc0 00:05:17.911 Malloc1 00:05:17.911 Malloc2 00:05:17.911 Malloc3 00:05:17.911 Malloc4 00:05:17.911 Malloc5 00:05:17.911 Malloc6 00:05:17.911 Malloc7 00:05:17.911 Malloc8 00:05:17.911 Malloc9 00:05:17.912 [2024-07-15 17:24:13.604339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:17.912 [2024-07-15 17:24:13.604371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:17.912 [2024-07-15 17:24:13.604387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xab30c43a980 00:05:17.912 [2024-07-15 17:24:13.604396] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:17.912 [2024-07-15 17:24:13.604744] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:17.912 [2024-07-15 17:24:13.604770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:17.912 TestPT 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.912 17:24:13 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:17.912 5000+0 records in 00:05:17.912 5000+0 records out 00:05:17.912 10240000 bytes transferred in 0.026244 secs (390182193 bytes/sec) 00:05:17.912 17:24:13 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 
2048 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:17.912 AIO0 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.912 17:24:13 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.912 17:24:13 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:05:17.912 17:24:13 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.912 17:24:13 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.912 17:24:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:18.222 17:24:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.222 17:24:13 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:18.222 17:24:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.222 17:24:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:18.222 17:24:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.222 17:24:13 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:05:18.222 17:24:13 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:05:18.222 17:24:13 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:05:18.222 17:24:13 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:18.222 17:24:13 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:18.222 17:24:13 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:18.222 17:24:13 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:05:18.222 17:24:13 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:05:18.224 17:24:13 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "0e2c9f0b-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0e2c9f0b-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "69a37e5f-1f50-c75c-aeb8-4d082136749e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "69a37e5f-1f50-c75c-aeb8-4d082136749e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2d6c186d-cd4f-7e5a-9799-7a3b1a495d86"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2d6c186d-cd4f-7e5a-9799-7a3b1a495d86",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3e0107c8-8794-6f50-9c31-02e2412d6e93"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3e0107c8-8794-6f50-9c31-02e2412d6e93",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "07756ac1-39b6-0358-8acf-c192a04ad038"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07756ac1-39b6-0358-8acf-c192a04ad038",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "fd4d53e7-1cc6-fc52-99e1-f0f77c8adc29"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd4d53e7-1cc6-fc52-99e1-f0f77c8adc29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "2395886e-90e0-d652-b0f1-98bb4d75905c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2395886e-90e0-d652-b0f1-98bb4d75905c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "f1d9c1be-2cb0-d553-adc7-ad473ea7c421"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1d9c1be-2cb0-d553-adc7-ad473ea7c421",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8dc10afb-e9f9-715d-ad42-11aed0cd08b5"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8dc10afb-e9f9-715d-ad42-11aed0cd08b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d33fe42a-3c4d-1257-af59-2c46b4b54e24"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d33fe42a-3c4d-1257-af59-2c46b4b54e24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "1de999e9-eca3-ce54-97e2-d681edd87219"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1de999e9-eca3-ce54-97e2-d681edd87219",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a7e37591-206d-2857-95d5-d18ce9178f35"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a7e37591-206d-2857-95d5-d18ce9178f35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "0e3a18ec-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0e3a18ec-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3a18ec-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0e31808f-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "0e32b908-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0e3b4563-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0e3b4563-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3b4563-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0e33f18a-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "0e352a0b-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0e3c7dd7-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0e3c7dd7-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3c7dd7-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "0e366286-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "0e379b09-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0e4509e0-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0e4509e0-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:18.224 17:24:13 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:05:18.224 17:24:13 blockdev_general -- bdev/blockdev.sh@752 -- # 
hello_world_bdev=Malloc0 00:05:18.224 17:24:13 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:05:18.224 17:24:13 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 47677 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 47677 ']' 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 47677 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@956 -- # ps -c -o command 47677 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@956 -- # tail -1 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:18.224 killing process with pid 47677 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47677' 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@967 -- # kill 47677 00:05:18.224 17:24:13 blockdev_general -- common/autotest_common.sh@972 -- # wait 47677 00:05:18.482 17:24:14 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:18.482 17:24:14 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:18.482 17:24:14 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:18.482 17:24:14 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.482 17:24:14 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:18.482 ************************************ 00:05:18.482 START TEST bdev_hello_world 00:05:18.482 ************************************ 00:05:18.739 17:24:14 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:18.739 [2024-07-15 17:24:14.318738] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:18.740 [2024-07-15 17:24:14.318921] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:19.305 EAL: TSC is not safe to use in SMP mode 00:05:19.305 EAL: TSC is not invariant 00:05:19.305 [2024-07-15 17:24:14.851443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.305 [2024-07-15 17:24:14.938442] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:05:19.305 [2024-07-15 17:24:14.940634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.305 [2024-07-15 17:24:14.999302] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:19.305 [2024-07-15 17:24:14.999341] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:19.305 [2024-07-15 17:24:15.007288] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:19.305 [2024-07-15 17:24:15.007313] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:19.305 [2024-07-15 17:24:15.015304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:19.305 [2024-07-15 17:24:15.015332] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:19.305 [2024-07-15 17:24:15.015341] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:19.305 [2024-07-15 17:24:15.063312] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:19.305 [2024-07-15 17:24:15.063359] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:19.305 [2024-07-15 17:24:15.063370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x165087436800 00:05:19.305 [2024-07-15 17:24:15.063378] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:19.305 [2024-07-15 17:24:15.063736] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:19.305 [2024-07-15 17:24:15.063763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:19.563 [2024-07-15 17:24:15.163422] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:19.563 [2024-07-15 17:24:15.163478] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:19.563 [2024-07-15 17:24:15.163491] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:19.563 [2024-07-15 17:24:15.163505] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:19.563 [2024-07-15 17:24:15.163519] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:19.563 [2024-07-15 17:24:15.163527] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:19.563 [2024-07-15 17:24:15.163538] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
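The bdev_hello_world pass above is the stock hello_bdev example binary driven with the test's JSON config: it opens Malloc0, grabs an I/O channel, writes "Hello World!" and reads it back, exactly as the NOTICE lines show. Re-running it standalone is a one-liner, assuming the same repo layout as in the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Opens Malloc0 from bdev.json, writes "Hello World!", reads it back, then stops.
    "$SPDK/build/examples/hello_bdev" --json "$SPDK/test/bdev/bdev.json" -b Malloc0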
00:05:19.563 00:05:19.563 [2024-07-15 17:24:15.163547] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:19.822 00:05:19.822 real 0m1.087s 00:05:19.822 user 0m0.504s 00:05:19.822 sys 0m0.582s 00:05:19.822 17:24:15 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.822 17:24:15 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:19.822 ************************************ 00:05:19.822 END TEST bdev_hello_world 00:05:19.822 ************************************ 00:05:19.822 17:24:15 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:19.822 17:24:15 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:05:19.822 17:24:15 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:19.822 17:24:15 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.822 17:24:15 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:19.822 ************************************ 00:05:19.822 START TEST bdev_bounds 00:05:19.822 ************************************ 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=47729 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.822 Process bdevio pid: 47729 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 47729' 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 47729 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 47729 ']' 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:19.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:19.822 17:24:15 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:19.822 [2024-07-15 17:24:15.448455] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:19.822 [2024-07-15 17:24:15.448671] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:20.388 EAL: TSC is not safe to use in SMP mode 00:05:20.388 EAL: TSC is not invariant 00:05:20.388 [2024-07-15 17:24:15.995866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:20.388 [2024-07-15 17:24:16.081107] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:20.388 [2024-07-15 17:24:16.081182] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
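The bdev_bounds stage that starts here does not let bdevio run on its own: the app is launched with -w (wait mode) and -s 2048 for the memory pre-reserved by blockdev.sh, and the CUnit suites that follow are kicked off over RPC by tests.py perform_tests. A minimal sketch of that control flow, assuming the paths and default socket from the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    "$SPDK/test/bdev/bdevio/bdevio" -w -s 2048 --json "$SPDK/test/bdev/bdev.json" '' &
    bdevio_pid=$!
    # (wait for /var/tmp/spdk.sock, then trigger the suites over RPC)
    "$SPDK/test/bdev/bdevio/tests.py" perform_tests
    kill "$bdevio_pid"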
00:05:20.388 [2024-07-15 17:24:16.081198] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:20.388 [2024-07-15 17:24:16.084948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.388 [2024-07-15 17:24:16.084841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.388 [2024-07-15 17:24:16.084942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:20.388 [2024-07-15 17:24:16.143708] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:20.388 [2024-07-15 17:24:16.143775] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:20.388 [2024-07-15 17:24:16.151691] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:20.388 [2024-07-15 17:24:16.151744] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:20.388 [2024-07-15 17:24:16.159716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:20.388 [2024-07-15 17:24:16.159776] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:20.388 [2024-07-15 17:24:16.159790] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:20.388 [2024-07-15 17:24:16.207713] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:20.388 [2024-07-15 17:24:16.207771] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:20.388 [2024-07-15 17:24:16.207783] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe9d3ce36800 00:05:20.388 [2024-07-15 17:24:16.207791] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:20.388 [2024-07-15 17:24:16.208159] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:20.388 [2024-07-15 17:24:16.208180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:20.970 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:20.970 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:05:20.970 17:24:16 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:20.970 I/O targets: 00:05:20.970 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:20.970 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:20.970 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:20.971 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:20.971 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:20.971 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:20.971 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:20.971 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:20.971 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:20.971 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:20.971 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:20.971 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:05:20.971 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:20.971 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:20.971 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:20.971 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:05:20.971 00:05:20.971 00:05:20.971 CUnit - A unit testing framework for C - Version 2.1-3 00:05:20.971 http://cunit.sourceforge.net/ 00:05:20.971 00:05:20.971 00:05:20.971 Suite: bdevio tests on: 
AIO0 00:05:20.971 Test: blockdev write read block ...passed 00:05:20.971 Test: blockdev write zeroes read block ...passed 00:05:20.971 Test: blockdev write zeroes read no split ...passed 00:05:20.971 Test: blockdev write zeroes read split ...passed 00:05:20.971 Test: blockdev write zeroes read split partial ...passed 00:05:20.971 Test: blockdev reset ...passed 00:05:20.971 Test: blockdev write read 8 blocks ...passed 00:05:20.971 Test: blockdev write read size > 128k ...passed 00:05:20.971 Test: blockdev write read invalid size ...passed 00:05:20.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.971 Test: blockdev write read max offset ...passed 00:05:20.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.971 Test: blockdev writev readv 8 blocks ...passed 00:05:20.971 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.971 Test: blockdev writev readv block ...passed 00:05:20.971 Test: blockdev writev readv size > 128k ...passed 00:05:20.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.971 Test: blockdev comparev and writev ...passed 00:05:20.971 Test: blockdev nvme passthru rw ...passed 00:05:20.971 Test: blockdev nvme passthru vendor specific ...passed 00:05:20.971 Test: blockdev nvme admin passthru ...passed 00:05:20.971 Test: blockdev copy ...passed 00:05:20.971 Suite: bdevio tests on: raid1 00:05:20.971 Test: blockdev write read block ...passed 00:05:20.971 Test: blockdev write zeroes read block ...passed 00:05:20.971 Test: blockdev write zeroes read no split ...passed 00:05:20.971 Test: blockdev write zeroes read split ...passed 00:05:20.971 Test: blockdev write zeroes read split partial ...passed 00:05:20.971 Test: blockdev reset ...passed 00:05:20.971 Test: blockdev write read 8 blocks ...passed 00:05:20.971 Test: blockdev write read size > 128k ...passed 00:05:20.971 Test: blockdev write read invalid size ...passed 00:05:20.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.971 Test: blockdev write read max offset ...passed 00:05:20.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.971 Test: blockdev writev readv 8 blocks ...passed 00:05:20.971 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.971 Test: blockdev writev readv block ...passed 00:05:20.971 Test: blockdev writev readv size > 128k ...passed 00:05:20.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.971 Test: blockdev comparev and writev ...passed 00:05:20.971 Test: blockdev nvme passthru rw ...passed 00:05:20.971 Test: blockdev nvme passthru vendor specific ...passed 00:05:20.971 Test: blockdev nvme admin passthru ...passed 00:05:20.971 Test: blockdev copy ...passed 00:05:20.971 Suite: bdevio tests on: concat0 00:05:20.971 Test: blockdev write read block ...passed 00:05:20.971 Test: blockdev write zeroes read block ...passed 00:05:20.971 Test: blockdev write zeroes read no split ...passed 00:05:20.971 Test: blockdev write zeroes read split ...passed 00:05:20.971 Test: blockdev write zeroes read split partial ...passed 00:05:20.971 Test: blockdev reset ...passed 00:05:20.971 Test: blockdev write read 8 blocks ...passed 00:05:20.971 Test: blockdev write read size > 128k ...passed 00:05:20.971 Test: blockdev write read invalid size ...passed 00:05:20.971 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.971 Test: blockdev write read max offset ...passed 00:05:20.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.971 Test: blockdev writev readv 8 blocks ...passed 00:05:20.971 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.971 Test: blockdev writev readv block ...passed 00:05:20.971 Test: blockdev writev readv size > 128k ...passed 00:05:20.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.971 Test: blockdev comparev and writev ...passed 00:05:20.971 Test: blockdev nvme passthru rw ...passed 00:05:20.971 Test: blockdev nvme passthru vendor specific ...passed 00:05:20.971 Test: blockdev nvme admin passthru ...passed 00:05:20.971 Test: blockdev copy ...passed 00:05:20.971 Suite: bdevio tests on: raid0 00:05:20.971 Test: blockdev write read block ...passed 00:05:20.971 Test: blockdev write zeroes read block ...passed 00:05:20.971 Test: blockdev write zeroes read no split ...passed 00:05:20.971 Test: blockdev write zeroes read split ...passed 00:05:20.971 Test: blockdev write zeroes read split partial ...passed 00:05:20.971 Test: blockdev reset ...passed 00:05:20.971 Test: blockdev write read 8 blocks ...passed 00:05:20.971 Test: blockdev write read size > 128k ...passed 00:05:20.971 Test: blockdev write read invalid size ...passed 00:05:20.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.971 Test: blockdev write read max offset ...passed 00:05:20.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.971 Test: blockdev writev readv 8 blocks ...passed 00:05:20.971 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.971 Test: blockdev writev readv block ...passed 00:05:20.971 Test: blockdev writev readv size > 128k ...passed 00:05:20.971 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:20.971 Test: blockdev comparev and writev ...passed 00:05:20.971 Test: blockdev nvme passthru rw ...passed 00:05:20.971 Test: blockdev nvme passthru vendor specific ...passed 00:05:20.971 Test: blockdev nvme admin passthru ...passed 00:05:20.971 Test: blockdev copy ...passed 00:05:20.971 Suite: bdevio tests on: TestPT 00:05:20.971 Test: blockdev write read block ...passed 00:05:20.971 Test: blockdev write zeroes read block ...passed 00:05:20.971 Test: blockdev write zeroes read no split ...passed 00:05:20.971 Test: blockdev write zeroes read split ...passed 00:05:20.971 Test: blockdev write zeroes read split partial ...passed 00:05:20.971 Test: blockdev reset ...passed 00:05:20.971 Test: blockdev write read 8 blocks ...passed 00:05:20.971 Test: blockdev write read size > 128k ...passed 00:05:20.971 Test: blockdev write read invalid size ...passed 00:05:20.971 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:20.971 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:20.971 Test: blockdev write read max offset ...passed 00:05:20.971 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:20.971 Test: blockdev writev readv 8 blocks ...passed 00:05:20.971 Test: blockdev writev readv 30 x 1block ...passed 00:05:20.971 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 
00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc2p7 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.230 Test: blockdev writev readv 8 blocks ...passed 00:05:21.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.230 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc2p6 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.230 Test: blockdev writev readv 8 blocks ...passed 00:05:21.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.230 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc2p5 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev 
write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.230 Test: blockdev writev readv 8 blocks ...passed 00:05:21.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.230 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc2p4 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.230 Test: blockdev writev readv 8 blocks ...passed 00:05:21.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.230 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc2p3 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.230 Test: blockdev writev readv 8 blocks ...passed 00:05:21.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.230 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc2p2 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.230 Test: blockdev writev readv 8 blocks ...passed 00:05:21.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.230 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc2p1 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.230 Test: blockdev writev readv 8 blocks ...passed 00:05:21.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.230 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 
00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc2p0 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.230 Test: blockdev writev readv 8 blocks ...passed 00:05:21.230 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.230 Test: blockdev writev readv block ...passed 00:05:21.230 Test: blockdev writev readv size > 128k ...passed 00:05:21.230 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.230 Test: blockdev comparev and writev ...passed 00:05:21.230 Test: blockdev nvme passthru rw ...passed 00:05:21.230 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.230 Test: blockdev nvme admin passthru ...passed 00:05:21.230 Test: blockdev copy ...passed 00:05:21.230 Suite: bdevio tests on: Malloc1p1 00:05:21.230 Test: blockdev write read block ...passed 00:05:21.230 Test: blockdev write zeroes read block ...passed 00:05:21.230 Test: blockdev write zeroes read no split ...passed 00:05:21.230 Test: blockdev write zeroes read split ...passed 00:05:21.230 Test: blockdev write zeroes read split partial ...passed 00:05:21.230 Test: blockdev reset ...passed 00:05:21.230 Test: blockdev write read 8 blocks ...passed 00:05:21.230 Test: blockdev write read size > 128k ...passed 00:05:21.230 Test: blockdev write read invalid size ...passed 00:05:21.230 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.230 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.230 Test: blockdev write read max offset ...passed 00:05:21.230 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.231 Test: blockdev writev readv 8 blocks ...passed 00:05:21.231 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.231 Test: blockdev writev readv block ...passed 00:05:21.231 Test: blockdev writev readv size > 128k ...passed 00:05:21.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.231 Test: blockdev comparev and writev ...passed 00:05:21.231 Test: blockdev nvme passthru rw ...passed 00:05:21.231 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.231 Test: blockdev nvme admin passthru ...passed 00:05:21.231 Test: blockdev copy ...passed 00:05:21.231 Suite: bdevio tests on: Malloc1p0 00:05:21.231 Test: blockdev write read block ...passed 00:05:21.231 Test: blockdev write zeroes read block ...passed 00:05:21.231 Test: blockdev write zeroes read no split ...passed 00:05:21.231 Test: blockdev write zeroes read split ...passed 00:05:21.231 Test: blockdev write 
zeroes read split partial ...passed 00:05:21.231 Test: blockdev reset ...passed 00:05:21.231 Test: blockdev write read 8 blocks ...passed 00:05:21.231 Test: blockdev write read size > 128k ...passed 00:05:21.231 Test: blockdev write read invalid size ...passed 00:05:21.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.231 Test: blockdev write read max offset ...passed 00:05:21.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.231 Test: blockdev writev readv 8 blocks ...passed 00:05:21.231 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.231 Test: blockdev writev readv block ...passed 00:05:21.231 Test: blockdev writev readv size > 128k ...passed 00:05:21.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.231 Test: blockdev comparev and writev ...passed 00:05:21.231 Test: blockdev nvme passthru rw ...passed 00:05:21.231 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.231 Test: blockdev nvme admin passthru ...passed 00:05:21.231 Test: blockdev copy ...passed 00:05:21.231 Suite: bdevio tests on: Malloc0 00:05:21.231 Test: blockdev write read block ...passed 00:05:21.231 Test: blockdev write zeroes read block ...passed 00:05:21.231 Test: blockdev write zeroes read no split ...passed 00:05:21.231 Test: blockdev write zeroes read split ...passed 00:05:21.231 Test: blockdev write zeroes read split partial ...passed 00:05:21.231 Test: blockdev reset ...passed 00:05:21.231 Test: blockdev write read 8 blocks ...passed 00:05:21.231 Test: blockdev write read size > 128k ...passed 00:05:21.231 Test: blockdev write read invalid size ...passed 00:05:21.231 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:21.231 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:21.231 Test: blockdev write read max offset ...passed 00:05:21.231 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:21.231 Test: blockdev writev readv 8 blocks ...passed 00:05:21.231 Test: blockdev writev readv 30 x 1block ...passed 00:05:21.231 Test: blockdev writev readv block ...passed 00:05:21.231 Test: blockdev writev readv size > 128k ...passed 00:05:21.231 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:21.231 Test: blockdev comparev and writev ...passed 00:05:21.231 Test: blockdev nvme passthru rw ...passed 00:05:21.231 Test: blockdev nvme passthru vendor specific ...passed 00:05:21.231 Test: blockdev nvme admin passthru ...passed 00:05:21.231 Test: blockdev copy ...passed 00:05:21.231 00:05:21.231 Run Summary: Type Total Ran Passed Failed Inactive 00:05:21.231 suites 16 16 n/a 0 0 00:05:21.231 tests 368 368 368 0 0 00:05:21.231 asserts 2224 2224 2224 0 n/a 00:05:21.231 00:05:21.231 Elapsed time = 0.516 seconds 00:05:21.231 0 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 47729 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 47729 ']' 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 47729 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o 
command 47729 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:05:21.231 killing process with pid 47729 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47729' 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 47729 00:05:21.231 17:24:16 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 47729 00:05:21.490 17:24:17 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:05:21.490 00:05:21.490 real 0m1.695s 00:05:21.490 user 0m3.400s 00:05:21.490 sys 0m0.712s 00:05:21.490 ************************************ 00:05:21.490 END TEST bdev_bounds 00:05:21.490 17:24:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.490 17:24:17 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:21.490 ************************************ 00:05:21.490 17:24:17 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:21.490 17:24:17 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:21.490 17:24:17 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:21.490 17:24:17 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.490 17:24:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:21.490 ************************************ 00:05:21.490 START TEST bdev_nbd 00:05:21.490 ************************************ 00:05:21.490 17:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:05:21.490 17:24:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:05:21.490 17:24:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:05:21.490 17:24:17 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:05:21.490 00:05:21.490 real 0m0.004s 00:05:21.490 user 0m0.001s 00:05:21.490 sys 0m0.007s 00:05:21.490 17:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.490 ************************************ 00:05:21.490 17:24:17 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:21.490 END TEST bdev_nbd 00:05:21.490 ************************************ 00:05:21.490 17:24:17 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:21.490 17:24:17 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:05:21.490 17:24:17 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']' 00:05:21.490 17:24:17 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']' 00:05:21.490 17:24:17 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite '' 00:05:21.490 17:24:17 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 
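[annotation] The teardown traced above is FreeBSD-specific: killprocess cannot read /proc/<pid>, so it resolves the target's name with ps -c -o command | tail -1 before signalling, and the bdev_nbd stage that follows returns immediately because NBD is only exercised on Linux. The following is a rough sketch of those two checks, reconstructed from the xtrace lines above; the real helpers in autotest_common.sh and blockdev.sh carry additional argument and signal handling.

    # Sketch only -- reconstructed from the xtrace output, not the full helper.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" || return 0          # nothing to do if the process is already gone
        if [ "$(uname)" != Linux ]; then
            # FreeBSD: no /proc/<pid>/exe, so ask ps(1) for the executable name.
            # -c prints only the command name; tail -1 drops the header row.
            process_name=$(ps -c -o command "$pid" | tail -1)
        fi
        # Never signal a sudo wrapper directly.
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid"
        fi
    }

    nbd_function_test() {
        # NBD is Linux-only; on FreeBSD the whole stage is a no-op, which is why
        # the log shows bdev_nbd finishing in a few milliseconds.
        [[ $(uname -s) == Linux ]] || return 0
        # ... Linux-only NBD setup elided ...
    }
[/annotation]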
00:05:21.490 17:24:17 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.490 17:24:17 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:21.490 ************************************ 00:05:21.490 START TEST bdev_fio 00:05:21.490 ************************************ 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:05:21.490 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:05:21.490 17:24:17 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:05:22.426 17:24:18 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.426 17:24:18 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:22.426 ************************************ 00:05:22.426 START TEST bdev_fio_rw_verify 00:05:22.426 ************************************ 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:22.426 17:24:18 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:22.426 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.426 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.426 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:22.427 fio-3.35 00:05:22.427 Starting 16 threads 00:05:22.991 EAL: TSC is not safe to use in SMP mode 00:05:22.991 EAL: TSC is not invariant 00:05:35.191 00:05:35.191 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101355: Mon Jul 15 17:24:29 2024 00:05:35.191 read: IOPS=233k, BW=912MiB/s (956MB/s)(9118MiB/10002msec) 00:05:35.191 slat (nsec): min=281, max=299230k, avg=4215.29, stdev=474186.67 00:05:35.191 clat (nsec): min=919, max=299259k, avg=45281.06, stdev=1401407.01 00:05:35.191 lat (usec): min=2, max=299260, avg=49.50, stdev=1479.54 00:05:35.191 clat percentiles (usec): 00:05:35.191 | 50.000th=[ 10], 99.000th=[ 717], 99.900th=[ 1045], 00:05:35.191 | 99.990th=[ 89654], 99.999th=[141558] 00:05:35.191 write: IOPS=398k, BW=1554MiB/s (1630MB/s)(15.0GiB/9907msec); 0 zone resets 00:05:35.191 slat (nsec): min=559, max=532698k, avg=21203.09, stdev=967694.49 00:05:35.191 clat (nsec): min=810, max=532795k, avg=101600.63, stdev=2086519.29 00:05:35.191 lat (usec): min=12, max=532806, avg=122.80, stdev=2300.89 00:05:35.191 clat percentiles (usec): 00:05:35.191 | 50.000th=[ 51], 99.000th=[ 701], 99.900th=[ 3130], 00:05:35.191 | 99.990th=[ 96994], 99.999th=[214959] 00:05:35.191 bw ( MiB/s): min= 590, max= 2520, per=100.00%, avg=1554.18, stdev=40.63, samples=299 00:05:35.191 iops : min=151165, max=645368, avg=397870.82, stdev=10400.87, samples=299 00:05:35.191 lat (nsec) : 1000=0.01% 00:05:35.191 lat (usec) : 2=0.04%, 4=11.25%, 10=17.47%, 20=21.61%, 50=16.50% 00:05:35.191 lat (usec) : 100=29.40%, 250=2.13%, 500=0.20%, 750=0.61%, 1000=0.61% 00:05:35.191 lat (msec) : 2=0.07%, 4=0.04%, 10=0.02%, 20=0.01%, 50=0.01% 00:05:35.191 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01% 00:05:35.191 cpu : usr=55.44%, sys=3.21%, ctx=901297, majf=0, minf=624 00:05:35.191 IO depths : 1=12.5%, 2=25.0%, 4=49.9%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:35.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:35.191 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:35.191 issued rwts: total=2334219,3941697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:05:35.191 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:35.191 00:05:35.191 Run status group 0 (all jobs): 00:05:35.191 READ: bw=912MiB/s (956MB/s), 912MiB/s-912MiB/s (956MB/s-956MB/s), io=9118MiB (9561MB), run=10002-10002msec 00:05:35.191 WRITE: bw=1554MiB/s (1630MB/s), 1554MiB/s-1554MiB/s (1630MB/s-1630MB/s), io=15.0GiB (16.1GB), run=9907-9907msec 00:05:35.191 00:05:35.191 real 0m12.314s 00:05:35.191 user 1m33.005s 00:05:35.191 sys 0m7.355s 00:05:35.191 17:24:30 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.191 17:24:30 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:05:35.191 ************************************ 
00:05:35.191 END TEST bdev_fio_rw_verify 00:05:35.191 ************************************ 00:05:35.191 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:05:35.191 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:05:35.191 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:35.191 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:05:35.191 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:05:35.192 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:35.193 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "0e2c9f0b-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0e2c9f0b-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "69a37e5f-1f50-c75c-aeb8-4d082136749e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"69a37e5f-1f50-c75c-aeb8-4d082136749e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2d6c186d-cd4f-7e5a-9799-7a3b1a495d86"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2d6c186d-cd4f-7e5a-9799-7a3b1a495d86",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3e0107c8-8794-6f50-9c31-02e2412d6e93"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3e0107c8-8794-6f50-9c31-02e2412d6e93",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "07756ac1-39b6-0358-8acf-c192a04ad038"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07756ac1-39b6-0358-8acf-c192a04ad038",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": 
false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "fd4d53e7-1cc6-fc52-99e1-f0f77c8adc29"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd4d53e7-1cc6-fc52-99e1-f0f77c8adc29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "2395886e-90e0-d652-b0f1-98bb4d75905c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2395886e-90e0-d652-b0f1-98bb4d75905c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "f1d9c1be-2cb0-d553-adc7-ad473ea7c421"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1d9c1be-2cb0-d553-adc7-ad473ea7c421",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8dc10afb-e9f9-715d-ad42-11aed0cd08b5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8dc10afb-e9f9-715d-ad42-11aed0cd08b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": 
true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d33fe42a-3c4d-1257-af59-2c46b4b54e24"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d33fe42a-3c4d-1257-af59-2c46b4b54e24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "1de999e9-eca3-ce54-97e2-d681edd87219"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1de999e9-eca3-ce54-97e2-d681edd87219",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a7e37591-206d-2857-95d5-d18ce9178f35"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a7e37591-206d-2857-95d5-d18ce9178f35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' 
' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "0e3a18ec-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0e3a18ec-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3a18ec-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0e31808f-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "0e32b908-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0e3b4563-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0e3b4563-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3b4563-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0e33f18a-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": 
"0e352a0b-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0e3c7dd7-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0e3c7dd7-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3c7dd7-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "0e366286-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "0e379b09-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0e4509e0-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0e4509e0-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:35.193 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:05:35.193 Malloc1p0 00:05:35.193 Malloc1p1 00:05:35.193 Malloc2p0 00:05:35.193 Malloc2p1 00:05:35.193 Malloc2p2 00:05:35.193 Malloc2p3 00:05:35.193 Malloc2p4 00:05:35.193 Malloc2p5 00:05:35.193 Malloc2p6 00:05:35.193 Malloc2p7 00:05:35.193 TestPT 00:05:35.193 raid0 00:05:35.193 concat0 ]] 00:05:35.193 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:35.194 17:24:30 
blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "0e2c9f0b-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0e2c9f0b-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "69a37e5f-1f50-c75c-aeb8-4d082136749e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "69a37e5f-1f50-c75c-aeb8-4d082136749e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "2d6c186d-cd4f-7e5a-9799-7a3b1a495d86"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "2d6c186d-cd4f-7e5a-9799-7a3b1a495d86",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "3e0107c8-8794-6f50-9c31-02e2412d6e93"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "3e0107c8-8794-6f50-9c31-02e2412d6e93",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "07756ac1-39b6-0358-8acf-c192a04ad038"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "07756ac1-39b6-0358-8acf-c192a04ad038",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "fd4d53e7-1cc6-fc52-99e1-f0f77c8adc29"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fd4d53e7-1cc6-fc52-99e1-f0f77c8adc29",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "2395886e-90e0-d652-b0f1-98bb4d75905c"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2395886e-90e0-d652-b0f1-98bb4d75905c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "f1d9c1be-2cb0-d553-adc7-ad473ea7c421"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "f1d9c1be-2cb0-d553-adc7-ad473ea7c421",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "8dc10afb-e9f9-715d-ad42-11aed0cd08b5"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "8dc10afb-e9f9-715d-ad42-11aed0cd08b5",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "d33fe42a-3c4d-1257-af59-2c46b4b54e24"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "d33fe42a-3c4d-1257-af59-2c46b4b54e24",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "1de999e9-eca3-ce54-97e2-d681edd87219"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "1de999e9-eca3-ce54-97e2-d681edd87219",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' 
"compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "a7e37591-206d-2857-95d5-d18ce9178f35"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "a7e37591-206d-2857-95d5-d18ce9178f35",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "0e3a18ec-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0e3a18ec-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3a18ec-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "0e31808f-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "0e32b908-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "0e3b4563-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "0e3b4563-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3b4563-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "0e33f18a-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "0e352a0b-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "0e3c7dd7-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "0e3c7dd7-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "0e3c7dd7-42cf-11ef-96ac-773515fba644",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "0e366286-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "0e379b09-42cf-11ef-96ac-773515fba644",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "0e4509e0-42cf-11ef-96ac-773515fba644"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "0e4509e0-42cf-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' 
"rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p3 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf 
'%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:05:35.194 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:05:35.195 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:35.195 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:05:35.195 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:05:35.195 17:24:30 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:35.195 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:35.195 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.195 17:24:30 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:35.195 ************************************ 00:05:35.195 START TEST bdev_fio_trim 00:05:35.195 ************************************ 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 
--verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:35.195 17:24:30 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:35.195 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 
job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:35.195 fio-3.35 00:05:35.195 Starting 14 threads 00:05:35.453 EAL: TSC is not safe to use in SMP mode 00:05:35.453 EAL: TSC is not invariant 00:05:47.657 00:05:47.657 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101374: Mon Jul 15 17:24:41 2024 00:05:47.657 write: IOPS=2347k, BW=9167MiB/s (9613MB/s)(89.5GiB/10001msec); 0 zone resets 00:05:47.657 slat (nsec): min=282, max=1259.9M, avg=1579.51, stdev=333463.55 00:05:47.657 clat (nsec): min=1242, max=2299.7M, avg=16414.04, stdev=1746797.16 00:05:47.657 lat (usec): min=2, max=2299.7k, avg=17.99, stdev=1778.34 00:05:47.657 clat percentiles (usec): 00:05:47.657 | 50.000th=[ 7], 99.000th=[ 19], 99.900th=[ 955], 99.990th=[ 963], 00:05:47.657 | 99.999th=[94897] 00:05:47.657 bw ( MiB/s): min= 3412, max=14646, per=100.00%, avg=9365.41, stdev=272.18, samples=256 00:05:47.657 iops : min=873534, max=3749577, avg=2397546.32, stdev=69678.20, samples=256 00:05:47.657 trim: IOPS=2347k, BW=9167MiB/s (9613MB/s)(89.5GiB/10001msec); 0 zone resets 00:05:47.657 slat (nsec): min=567, max=325524k, avg=1445.38, stdev=194406.83 00:05:47.657 clat (nsec): min=396, max=1259.9M, avg=11832.09, stdev=714047.35 00:05:47.657 lat (nsec): min=1693, max=1259.9M, avg=13277.46, stdev=740046.30 00:05:47.657 clat percentiles (usec): 00:05:47.657 | 50.000th=[ 8], 99.000th=[ 20], 99.900th=[ 28], 99.990th=[ 58], 00:05:47.657 | 99.999th=[94897] 00:05:47.657 bw ( MiB/s): min= 3412, max=14646, per=100.00%, avg=9365.42, stdev=272.18, samples=256 00:05:47.657 iops : min=873536, max=3749577, avg=2397548.21, stdev=69678.15, samples=256 00:05:47.657 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:05:47.657 lat (usec) : 2=0.08%, 4=22.01%, 10=56.98%, 20=20.02%, 50=0.67% 00:05:47.657 lat (usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.18% 00:05:47.657 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 
50=0.01% 00:05:47.657 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 2000=0.01% 00:05:47.657 lat (msec) : >=2000=0.01% 00:05:47.657 cpu : usr=63.21%, sys=4.39%, ctx=1172749, majf=0, minf=0 00:05:47.657 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:05:47.657 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:47.657 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:05:47.657 issued rwts: total=0,23470932,23470936,0 short=0,0,0,0 dropped=0,0,0,0 00:05:47.657 latency : target=0, window=0, percentile=100.00%, depth=8 00:05:47.657 00:05:47.657 Run status group 0 (all jobs): 00:05:47.657 WRITE: bw=9167MiB/s (9613MB/s), 9167MiB/s-9167MiB/s (9613MB/s-9613MB/s), io=89.5GiB (96.1GB), run=10001-10001msec 00:05:47.657 TRIM: bw=9167MiB/s (9613MB/s), 9167MiB/s-9167MiB/s (9613MB/s-9613MB/s), io=89.5GiB (96.1GB), run=10001-10001msec 00:05:47.657 00:05:47.657 real 0m12.437s 00:05:47.657 user 1m34.148s 00:05:47.657 sys 0m9.035s 00:05:47.657 17:24:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.657 17:24:42 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 ************************************ 00:05:47.657 END TEST bdev_fio_trim 00:05:47.657 ************************************ 00:05:47.657 17:24:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:05:47.657 17:24:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:05:47.657 17:24:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:47.657 /home/vagrant/spdk_repo/spdk 00:05:47.657 17:24:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:05:47.657 17:24:42 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:05:47.657 00:05:47.657 real 0m25.715s 00:05:47.657 user 3m7.421s 00:05:47.657 sys 0m17.051s 00:05:47.657 17:24:42 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:47.657 ************************************ 00:05:47.657 END TEST bdev_fio 00:05:47.657 ************************************ 00:05:47.657 17:24:42 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 17:24:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:47.657 17:24:42 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:47.657 17:24:42 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:47.657 17:24:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:05:47.657 17:24:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.657 17:24:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:47.657 ************************************ 00:05:47.657 START TEST bdev_verify 00:05:47.657 ************************************ 00:05:47.657 17:24:42 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:05:47.657 [2024-07-15 17:24:42.997417] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:05:47.657 [2024-07-15 17:24:42.997703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:47.914 EAL: TSC is not safe to use in SMP mode 00:05:47.914 EAL: TSC is not invariant 00:05:47.914 [2024-07-15 17:24:43.586058] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.914 [2024-07-15 17:24:43.688637] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:47.914 [2024-07-15 17:24:43.688707] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:47.914 [2024-07-15 17:24:43.692025] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.914 [2024-07-15 17:24:43.692012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.172 [2024-07-15 17:24:43.753640] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:48.172 [2024-07-15 17:24:43.753705] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:48.172 [2024-07-15 17:24:43.761614] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:48.172 [2024-07-15 17:24:43.761648] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:48.172 [2024-07-15 17:24:43.769635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:48.172 [2024-07-15 17:24:43.769670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:48.172 [2024-07-15 17:24:43.769682] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:48.172 [2024-07-15 17:24:43.817648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:48.172 [2024-07-15 17:24:43.817717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:48.172 [2024-07-15 17:24:43.817731] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20699d636800 00:05:48.172 [2024-07-15 17:24:43.817741] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:48.172 [2024-07-15 17:24:43.818191] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:48.172 [2024-07-15 17:24:43.818221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:48.172 Running I/O for 5 seconds... 
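For reference, the verify pass above is driven by the bdevperf example binary with the flags visible in the run_test trace. A minimal sketch of an equivalent stand-alone invocation, reusing only the paths and flags that appear in the log (the flag comments are an interpretation added here, not part of the log output):

  # 5-second verify workload on every bdev described in bdev.json:
  #   -q 128   queue depth
  #   -o 4096  I/O size in bytes
  #   -w verify  workload type
  #   -t 5     run time in seconds
  #   -m 0x3   core mask (two reactors); -C is passed through from the trace as-is
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3

In the results that follow, each bdev appears twice (Core Mask 0x1 and Core Mask 0x2) because each reactor is assigned its own verification LBA range on the same bdev.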
00:05:53.436 00:05:53.436 Latency(us) 00:05:53.436 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:05:53.436 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x1000 00:05:53.436 Malloc0 : 5.02 6547.57 25.58 0.00 0.00 19541.68 64.70 50760.68 00:05:53.436 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x1000 length 0x1000 00:05:53.436 Malloc0 : 5.03 229.76 0.90 0.00 0.00 556252.25 53.53 1243040.97 00:05:53.436 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x800 00:05:53.436 Malloc1p0 : 5.02 5172.49 20.21 0.00 0.00 24731.62 266.24 23473.84 00:05:53.436 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x800 length 0x800 00:05:53.436 Malloc1p0 : 5.02 5939.43 23.20 0.00 0.00 21536.81 331.40 25380.34 00:05:53.436 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x800 00:05:53.436 Malloc1p1 : 5.02 5172.12 20.20 0.00 0.00 24728.34 273.69 22878.05 00:05:53.436 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x800 length 0x800 00:05:53.436 Malloc1p1 : 5.02 5938.70 23.20 0.00 0.00 21534.17 327.68 24784.56 00:05:53.436 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x200 00:05:53.436 Malloc2p0 : 5.02 5171.64 20.20 0.00 0.00 24725.49 249.48 23592.99 00:05:53.436 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x200 length 0x200 00:05:53.436 Malloc2p0 : 5.02 5938.32 23.20 0.00 0.00 21530.87 368.64 24546.25 00:05:53.436 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x200 00:05:53.436 Malloc2p1 : 5.02 5171.24 20.20 0.00 0.00 24723.38 310.92 23473.84 00:05:53.436 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x200 length 0x200 00:05:53.436 Malloc2p1 : 5.02 5937.85 23.19 0.00 0.00 21527.23 318.37 23950.46 00:05:53.436 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x200 00:05:53.436 Malloc2p2 : 5.03 5170.84 20.20 0.00 0.00 24720.45 260.65 23116.37 00:05:53.436 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x200 length 0x200 00:05:53.436 Malloc2p2 : 5.02 5937.31 23.19 0.00 0.00 21524.11 331.40 22043.96 00:05:53.436 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x200 00:05:53.436 Malloc2p3 : 5.03 5170.45 20.20 0.00 0.00 24718.02 251.35 22758.90 00:05:53.436 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x200 length 0x200 00:05:53.436 Malloc2p3 : 5.02 5936.76 23.19 0.00 0.00 21520.68 342.58 21448.18 00:05:53.436 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x200 00:05:53.436 Malloc2p4 : 5.03 5169.98 20.20 0.00 0.00 24715.24 301.61 22401.43 
00:05:53.436 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x200 length 0x200 00:05:53.436 Malloc2p4 : 5.02 5936.35 23.19 0.00 0.00 21517.72 428.22 20494.92 00:05:53.436 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x200 00:05:53.436 Malloc2p5 : 5.03 5169.61 20.19 0.00 0.00 24712.64 281.13 22401.43 00:05:53.436 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x200 length 0x200 00:05:53.436 Malloc2p5 : 5.02 5935.88 23.19 0.00 0.00 21513.37 342.58 21805.65 00:05:53.436 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x200 00:05:53.436 Malloc2p6 : 5.03 5169.22 20.19 0.00 0.00 24709.48 281.13 22401.43 00:05:53.436 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x200 length 0x200 00:05:53.436 Malloc2p6 : 5.02 5935.32 23.18 0.00 0.00 21510.12 322.10 21329.02 00:05:53.436 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x200 00:05:53.436 Malloc2p7 : 5.03 5168.75 20.19 0.00 0.00 24706.58 264.38 23116.37 00:05:53.436 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x200 length 0x200 00:05:53.436 Malloc2p7 : 5.03 5934.82 23.18 0.00 0.00 21507.13 355.61 22282.27 00:05:53.436 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x1000 00:05:53.436 TestPT : 5.03 5142.99 20.09 0.00 0.00 24813.36 737.28 22163.12 00:05:53.436 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x1000 length 0x1000 00:05:53.436 TestPT : 5.03 4990.29 19.49 0.00 0.00 25571.04 1593.72 78166.69 00:05:53.436 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x2000 00:05:53.436 raid0 : 5.03 5168.09 20.19 0.00 0.00 24701.02 281.13 21329.02 00:05:53.436 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x2000 length 0x2000 00:05:53.436 raid0 : 5.03 5933.85 23.18 0.00 0.00 21501.03 458.01 23354.68 00:05:53.436 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x2000 00:05:53.436 concat0 : 5.03 5167.72 20.19 0.00 0.00 24698.10 275.55 22163.12 00:05:53.436 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x2000 length 0x2000 00:05:53.436 concat0 : 5.03 5933.48 23.18 0.00 0.00 21495.54 513.86 24069.62 00:05:53.436 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x1000 00:05:53.436 raid1 : 5.03 5167.28 20.18 0.00 0.00 24694.21 312.79 23473.84 00:05:53.436 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x1000 length 0x1000 00:05:53.436 raid1 : 5.03 5932.92 23.18 0.00 0.00 21490.31 398.43 25142.03 00:05:53.436 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x0 length 0x4e2 00:05:53.436 
AIO0 : 5.22 708.91 2.77 0.00 0.00 178343.19 1117.09 373674.89 00:05:53.436 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:05:53.436 Verification LBA range: start 0x4e2 length 0x4e2 00:05:53.436 AIO0 : 5.22 713.82 2.79 0.00 0.00 177037.17 18826.73 436589.54 00:05:53.436 =================================================================================================================== 00:05:53.436 Total : 162713.73 635.60 0.00 0.00 25159.99 53.53 1243040.97 00:05:53.695 00:05:53.695 real 0m6.442s 00:05:53.695 user 0m10.393s 00:05:53.695 sys 0m0.715s 00:05:53.695 17:24:49 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.695 ************************************ 00:05:53.695 END TEST bdev_verify 00:05:53.695 ************************************ 00:05:53.695 17:24:49 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:05:53.695 17:24:49 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:53.695 17:24:49 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:53.695 17:24:49 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:05:53.695 17:24:49 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.695 17:24:49 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:53.695 ************************************ 00:05:53.695 START TEST bdev_verify_big_io 00:05:53.695 ************************************ 00:05:53.695 17:24:49 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:05:53.695 [2024-07-15 17:24:49.492105] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:05:53.695 [2024-07-15 17:24:49.492477] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:54.628 EAL: TSC is not safe to use in SMP mode 00:05:54.628 EAL: TSC is not invariant 00:05:54.628 [2024-07-15 17:24:50.226141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.628 [2024-07-15 17:24:50.323620] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:54.628 [2024-07-15 17:24:50.323697] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:05:54.628 [2024-07-15 17:24:50.327161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.628 [2024-07-15 17:24:50.327148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.628 [2024-07-15 17:24:50.387111] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:54.628 [2024-07-15 17:24:50.387179] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:54.628 [2024-07-15 17:24:50.395095] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:54.628 [2024-07-15 17:24:50.395138] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:54.628 [2024-07-15 17:24:50.403115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:54.628 [2024-07-15 17:24:50.403158] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:54.628 [2024-07-15 17:24:50.403169] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:54.628 [2024-07-15 17:24:50.451125] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:54.628 [2024-07-15 17:24:50.451198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:54.628 [2024-07-15 17:24:50.451222] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x352555036800 00:05:54.628 [2024-07-15 17:24:50.451232] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:54.628 [2024-07-15 17:24:50.451666] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:54.628 [2024-07-15 17:24:50.451690] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:54.888 [2024-07-15 17:24:50.552776] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.553051] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.553248] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.553461] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.553649] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.553837] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.554030] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.554208] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.554377] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.554557] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.554744] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.554932] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.555137] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.555361] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.555586] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.555803] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:05:54.888 [2024-07-15 17:24:50.557824] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:05:54.888 [2024-07-15 17:24:50.558046] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:05:54.888 Running I/O for 5 seconds... 
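A note on the queue-depth warnings above: this pass uses 65536-byte I/O (-o 65536) against small split bdevs. Each Malloc2p* split is 8192 blocks x 512 B = 4 MiB, i.e. only 64 distinct 64 KiB regions, and AIO0 is 5000 blocks x 2048 B (roughly 9.8 MiB, about 156 such regions), so bdevperf cannot keep 128 unique verify requests outstanding against them and clamps the effective depth (to 32 and 78 here, as the warnings state). The exact clamping rule is internal to bdevperf and not shown in the log; the arithmetic is only meant to illustrate why -q 128 cannot be honoured on these bdevs.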
00:06:00.177 00:06:00.177 Latency(us) 00:06:00.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:00.177 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.177 Verification LBA range: start 0x0 length 0x100 00:06:00.177 Malloc0 : 5.05 4158.87 259.93 0.00 0.00 30687.36 90.76 99614.86 00:06:00.177 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.177 Verification LBA range: start 0x100 length 0x100 00:06:00.177 Malloc0 : 5.07 3255.29 203.46 0.00 0.00 39222.23 109.38 123922.80 00:06:00.178 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x80 00:06:00.178 Malloc1p0 : 5.10 536.32 33.52 0.00 0.00 237021.14 389.12 303134.22 00:06:00.178 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x80 length 0x80 00:06:00.178 Malloc1p0 : 5.09 1544.86 96.55 0.00 0.00 82370.36 1519.25 158239.88 00:06:00.178 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x80 00:06:00.178 Malloc1p1 : 5.10 536.30 33.52 0.00 0.00 236554.03 454.28 293601.70 00:06:00.178 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x80 length 0x80 00:06:00.178 Malloc1p1 : 5.11 429.34 26.83 0.00 0.00 296080.03 547.38 350796.84 00:06:00.178 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x20 00:06:00.178 Malloc2p0 : 5.08 523.08 32.69 0.00 0.00 60694.17 275.55 112007.14 00:06:00.178 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x20 length 0x20 00:06:00.178 Malloc2p0 : 5.09 412.08 25.75 0.00 0.00 77066.61 351.88 115343.53 00:06:00.178 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x20 00:06:00.178 Malloc2p1 : 5.08 523.05 32.69 0.00 0.00 60656.28 275.55 110577.26 00:06:00.178 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x20 length 0x20 00:06:00.178 Malloc2p1 : 5.09 412.05 25.75 0.00 0.00 77028.79 355.61 114390.27 00:06:00.178 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x20 00:06:00.178 Malloc2p2 : 5.08 523.02 32.69 0.00 0.00 60624.96 288.58 109147.39 00:06:00.178 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x20 length 0x20 00:06:00.178 Malloc2p2 : 5.09 412.02 25.75 0.00 0.00 76984.11 361.19 113437.02 00:06:00.178 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x20 00:06:00.178 Malloc2p3 : 5.08 522.99 32.69 0.00 0.00 60595.87 284.86 108194.13 00:06:00.178 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x20 length 0x20 00:06:00.178 Malloc2p3 : 5.09 411.99 25.75 0.00 0.00 76957.80 318.37 112483.77 00:06:00.178 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x20 00:06:00.178 Malloc2p4 : 5.08 522.97 32.69 0.00 0.00 60555.69 296.03 106764.25 00:06:00.178 
Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x20 length 0x20 00:06:00.178 Malloc2p4 : 5.09 411.96 25.75 0.00 0.00 76923.55 342.58 111530.52 00:06:00.178 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x20 00:06:00.178 Malloc2p5 : 5.08 522.95 32.68 0.00 0.00 60533.41 307.20 104857.75 00:06:00.178 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x20 length 0x20 00:06:00.178 Malloc2p5 : 5.09 411.94 25.75 0.00 0.00 76896.89 361.19 110577.26 00:06:00.178 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x20 00:06:00.178 Malloc2p6 : 5.08 522.92 32.68 0.00 0.00 60508.81 299.75 103427.87 00:06:00.178 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x20 length 0x20 00:06:00.178 Malloc2p6 : 5.09 411.91 25.74 0.00 0.00 76873.99 338.85 109624.01 00:06:00.178 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x20 00:06:00.178 Malloc2p7 : 5.08 522.90 32.68 0.00 0.00 60479.16 284.86 101997.99 00:06:00.178 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x20 length 0x20 00:06:00.178 Malloc2p7 : 5.09 411.88 25.74 0.00 0.00 76844.74 320.23 108670.76 00:06:00.178 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x100 00:06:00.178 TestPT : 5.13 536.89 33.56 0.00 0.00 234238.59 2517.18 220201.28 00:06:00.178 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x100 length 0x100 00:06:00.178 TestPT : 5.20 269.58 16.85 0.00 0.00 466015.25 6523.82 495691.18 00:06:00.178 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x200 00:06:00.178 raid0 : 5.10 542.28 33.89 0.00 0.00 232487.31 398.43 265004.13 00:06:00.178 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x200 length 0x200 00:06:00.178 raid0 : 5.11 429.32 26.83 0.00 0.00 294066.17 510.14 329825.29 00:06:00.178 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x200 00:06:00.178 concat0 : 5.10 542.26 33.89 0.00 0.00 232059.47 351.88 257378.11 00:06:00.178 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x200 length 0x200 00:06:00.178 concat0 : 5.11 429.30 26.83 0.00 0.00 293558.72 487.80 322199.27 00:06:00.178 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x100 00:06:00.178 raid1 : 5.10 545.57 34.10 0.00 0.00 230239.44 426.36 247845.59 00:06:00.178 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x100 length 0x100 00:06:00.178 raid1 : 5.11 432.37 27.02 0.00 0.00 290957.93 655.36 310760.24 00:06:00.178 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x0 length 0x4e 00:06:00.178 AIO0 : 5.10 551.40 
34.46 0.00 0.00 138723.55 565.99 152520.36 00:06:00.178 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536) 00:06:00.178 Verification LBA range: start 0x4e length 0x4e 00:06:00.178 AIO0 : 5.10 428.36 26.77 0.00 0.00 178725.98 491.52 187790.70 00:06:00.178 =================================================================================================================== 00:06:00.178 Total : 22648.00 1415.50 0.00 0.00 107698.82 90.76 495691.18 00:06:00.436 00:06:00.436 real 0m6.582s 00:06:00.436 user 0m11.268s 00:06:00.436 sys 0m0.905s 00:06:00.436 17:24:56 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.436 17:24:56 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:00.436 ************************************ 00:06:00.436 END TEST bdev_verify_big_io 00:06:00.436 ************************************ 00:06:00.436 17:24:56 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:00.436 17:24:56 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:00.436 17:24:56 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:00.436 17:24:56 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.436 17:24:56 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:00.436 ************************************ 00:06:00.437 START TEST bdev_write_zeroes 00:06:00.437 ************************************ 00:06:00.437 17:24:56 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:00.437 [2024-07-15 17:24:56.126435] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:06:00.437 [2024-07-15 17:24:56.126745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:01.076 EAL: TSC is not safe to use in SMP mode 00:06:01.076 EAL: TSC is not invariant 00:06:01.076 [2024-07-15 17:24:56.814167] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.335 [2024-07-15 17:24:56.922309] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:01.335 [2024-07-15 17:24:56.924825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.335 [2024-07-15 17:24:56.986521] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:01.335 [2024-07-15 17:24:56.986601] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:01.335 [2024-07-15 17:24:56.994503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:01.335 [2024-07-15 17:24:56.994540] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:01.335 [2024-07-15 17:24:57.002518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:01.335 [2024-07-15 17:24:57.002555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:01.335 [2024-07-15 17:24:57.002565] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:01.335 [2024-07-15 17:24:57.050535] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:01.335 [2024-07-15 17:24:57.050614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:01.335 [2024-07-15 17:24:57.050627] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12faf9036800 00:06:01.335 [2024-07-15 17:24:57.050635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:01.335 [2024-07-15 17:24:57.051266] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:01.335 [2024-07-15 17:24:57.051295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:01.594 Running I/O for 1 seconds... 
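The vbdev_passthru notices just above show the TestPT device being stacked on Malloc3: creation is first deferred until the base bdev arrives, then Malloc3 is opened and claimed and the pt_bdev TestPT is registered on top of it. A rough sketch of how that stacking is normally requested over RPC follows; the bdev_passthru_create command and its -b/-p flags are an assumption based on SPDK's rpc.py and do not appear anywhere in this log.

# Hypothetical RPC sequence for the Malloc3 -> TestPT passthru stacking described above
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_passthru_create -b Malloc3 -p TestPT   # claim Malloc3, expose it as TestPT
$rpc bdev_get_bdevs -b TestPT                    # confirm the passthru bdev registered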
00:06:02.530 00:06:02.530 Latency(us) 00:06:02.530 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:02.530 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc0 : 1.01 23868.71 93.24 0.00 0.00 5361.66 200.15 11081.56 00:06:02.530 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc1p0 : 1.01 23860.85 93.21 0.00 0.00 5360.87 222.49 11081.56 00:06:02.530 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc1p1 : 1.01 23856.59 93.19 0.00 0.00 5358.50 215.97 11021.98 00:06:02.530 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc2p0 : 1.01 23854.15 93.18 0.00 0.00 5356.11 227.14 10902.82 00:06:02.530 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc2p1 : 1.01 23849.43 93.16 0.00 0.00 5354.30 230.87 10843.24 00:06:02.530 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc2p2 : 1.01 23847.11 93.15 0.00 0.00 5351.99 219.69 10902.82 00:06:02.530 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc2p3 : 1.01 23844.89 93.14 0.00 0.00 5349.58 218.76 11021.98 00:06:02.530 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc2p4 : 1.01 23841.72 93.13 0.00 0.00 5348.19 233.66 11200.71 00:06:02.530 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc2p5 : 1.01 23838.32 93.12 0.00 0.00 5345.68 224.35 11141.14 00:06:02.530 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc2p6 : 1.01 23836.08 93.11 0.00 0.00 5343.42 222.49 11081.56 00:06:02.530 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 Malloc2p7 : 1.01 23833.32 93.10 0.00 0.00 5341.57 229.00 11081.56 00:06:02.530 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 TestPT : 1.01 23830.91 93.09 0.00 0.00 5338.86 243.90 10724.09 00:06:02.530 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 raid0 : 1.01 23821.62 93.05 0.00 0.00 5336.91 314.65 10783.67 00:06:02.530 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 concat0 : 1.01 23914.52 93.42 0.00 0.00 5313.13 299.75 10843.24 00:06:02.530 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 raid1 : 1.01 23910.76 93.40 0.00 0.00 5309.34 387.26 11021.98 00:06:02.530 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:02.530 AIO0 : 1.06 2262.23 8.84 0.00 0.00 54585.44 418.91 162052.89 00:06:02.530 =================================================================================================================== 00:06:02.530 Total : 360071.21 1406.53 0.00 0.00 5670.73 200.15 162052.89 00:06:02.788 00:06:02.788 real 0m2.458s 00:06:02.788 user 0m1.582s 00:06:02.788 sys 0m0.763s 00:06:02.788 17:24:58 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.788 ************************************ 00:06:02.788 END TEST bdev_write_zeroes 00:06:02.788 ************************************ 00:06:02.788 17:24:58 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:02.788 17:24:58 blockdev_general 
-- common/autotest_common.sh@1142 -- # return 0 00:06:02.788 17:24:58 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:02.788 17:24:58 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:02.788 17:24:58 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.788 17:24:58 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:03.046 ************************************ 00:06:03.046 START TEST bdev_json_nonenclosed 00:06:03.046 ************************************ 00:06:03.046 17:24:58 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:03.046 [2024-07-15 17:24:58.632436] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:06:03.046 [2024-07-15 17:24:58.632580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:03.612 EAL: TSC is not safe to use in SMP mode 00:06:03.612 EAL: TSC is not invariant 00:06:03.612 [2024-07-15 17:24:59.164668] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.612 [2024-07-15 17:24:59.269411] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:03.612 [2024-07-15 17:24:59.271956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.612 [2024-07-15 17:24:59.272011] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:06:03.612 [2024-07-15 17:24:59.272023] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:03.612 [2024-07-15 17:24:59.272032] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.612 00:06:03.612 real 0m0.807s 00:06:03.612 user 0m0.215s 00:06:03.612 sys 0m0.590s 00:06:03.612 17:24:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:06:03.612 17:24:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.612 17:24:59 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:03.612 ************************************ 00:06:03.612 END TEST bdev_json_nonenclosed 00:06:03.612 ************************************ 00:06:03.870 17:24:59 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:06:03.870 17:24:59 blockdev_general -- bdev/blockdev.sh@782 -- # true 00:06:03.870 17:24:59 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:03.870 17:24:59 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:03.870 17:24:59 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.870 17:24:59 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:03.870 ************************************ 00:06:03.870 START TEST bdev_json_nonarray 00:06:03.870 ************************************ 00:06:03.870 17:24:59 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:03.870 [2024-07-15 17:24:59.490655] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:06:03.870 [2024-07-15 17:24:59.490931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:04.438 EAL: TSC is not safe to use in SMP mode 00:06:04.438 EAL: TSC is not invariant 00:06:04.438 [2024-07-15 17:25:00.028131] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.438 [2024-07-15 17:25:00.130310] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:04.438 [2024-07-15 17:25:00.132800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.438 [2024-07-15 17:25:00.132878] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
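Both JSON negative tests in this stretch feed bdevperf a deliberately malformed --json config and expect json_config_prepare_ctx to reject it (the suite then checks for exit status 234). The real contents of nonenclosed.json and nonarray.json are not visible in this log; the snippets below are only an illustration of the two shapes the error messages describe, next to the enclosing-object-with-array layout SPDK accepts.

# Illustrative only -- not the repository's actual test fixtures
cat > nonenclosed.example.json <<'EOF'
"subsystems": []
EOF
cat > nonarray.example.json <<'EOF'
{ "subsystems": {} }
EOF
cat > valid.example.json <<'EOF'
{ "subsystems": [] }
EOF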
00:06:04.438 [2024-07-15 17:25:00.132892] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:04.438 [2024-07-15 17:25:00.132902] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.438 00:06:04.438 real 0m0.769s 00:06:04.438 user 0m0.176s 00:06:04.438 sys 0m0.591s 00:06:04.438 17:25:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:06:04.438 17:25:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.438 17:25:00 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:04.438 ************************************ 00:06:04.438 END TEST bdev_json_nonarray 00:06:04.438 ************************************ 00:06:04.696 17:25:00 blockdev_general -- common/autotest_common.sh@1142 -- # return 234 00:06:04.696 17:25:00 blockdev_general -- bdev/blockdev.sh@785 -- # true 00:06:04.696 17:25:00 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]] 00:06:04.696 17:25:00 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite '' 00:06:04.696 17:25:00 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:04.696 17:25:00 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.696 17:25:00 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:04.696 ************************************ 00:06:04.696 START TEST bdev_qos 00:06:04.696 ************************************ 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite '' 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48142 00:06:04.696 Process qos testing pid: 48142 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48142' 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 '' 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48142 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 48142 ']' 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.696 17:25:00 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:04.696 [2024-07-15 17:25:00.303070] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:06:04.696 [2024-07-15 17:25:00.303318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:05.261 EAL: TSC is not safe to use in SMP mode 00:06:05.261 EAL: TSC is not invariant 00:06:05.261 [2024-07-15 17:25:00.849706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.261 [2024-07-15 17:25:00.946920] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:05.261 [2024-07-15 17:25:00.949356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:05.827 Malloc_0 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:05.827 [ 00:06:05.827 { 00:06:05.827 "name": "Malloc_0", 00:06:05.827 "aliases": [ 00:06:05.827 "2ab3656e-42cf-11ef-96ac-773515fba644" 00:06:05.827 ], 00:06:05.827 "product_name": "Malloc disk", 00:06:05.827 "block_size": 512, 00:06:05.827 "num_blocks": 262144, 00:06:05.827 "uuid": "2ab3656e-42cf-11ef-96ac-773515fba644", 00:06:05.827 "assigned_rate_limits": { 00:06:05.827 "rw_ios_per_sec": 0, 00:06:05.827 "rw_mbytes_per_sec": 0, 00:06:05.827 "r_mbytes_per_sec": 0, 00:06:05.827 "w_mbytes_per_sec": 0 00:06:05.827 }, 00:06:05.827 "claimed": false, 00:06:05.827 "zoned": false, 00:06:05.827 "supported_io_types": { 00:06:05.827 "read": true, 00:06:05.827 "write": true, 00:06:05.827 "unmap": true, 00:06:05.827 "flush": true, 00:06:05.827 "reset": true, 00:06:05.827 "nvme_admin": false, 00:06:05.827 "nvme_io": false, 00:06:05.827 "nvme_io_md": false, 00:06:05.827 "write_zeroes": true, 00:06:05.827 "zcopy": true, 00:06:05.827 
"get_zone_info": false, 00:06:05.827 "zone_management": false, 00:06:05.827 "zone_append": false, 00:06:05.827 "compare": false, 00:06:05.827 "compare_and_write": false, 00:06:05.827 "abort": true, 00:06:05.827 "seek_hole": false, 00:06:05.827 "seek_data": false, 00:06:05.827 "copy": true, 00:06:05.827 "nvme_iov_md": false 00:06:05.827 }, 00:06:05.827 "memory_domains": [ 00:06:05.827 { 00:06:05.827 "dma_device_id": "system", 00:06:05.827 "dma_device_type": 1 00:06:05.827 }, 00:06:05.827 { 00:06:05.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:05.827 "dma_device_type": 2 00:06:05.827 } 00:06:05.827 ], 00:06:05.827 "driver_specific": {} 00:06:05.827 } 00:06:05.827 ] 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:05.827 Null_1 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:05.827 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:05.827 [ 00:06:05.827 { 00:06:05.827 "name": "Null_1", 00:06:05.827 "aliases": [ 00:06:05.827 "2ab846c9-42cf-11ef-96ac-773515fba644" 00:06:05.827 ], 00:06:05.827 "product_name": "Null disk", 00:06:05.827 "block_size": 512, 00:06:05.827 "num_blocks": 262144, 00:06:05.828 "uuid": "2ab846c9-42cf-11ef-96ac-773515fba644", 00:06:05.828 "assigned_rate_limits": { 00:06:05.828 "rw_ios_per_sec": 0, 00:06:05.828 "rw_mbytes_per_sec": 0, 00:06:05.828 "r_mbytes_per_sec": 0, 00:06:05.828 "w_mbytes_per_sec": 0 00:06:05.828 }, 00:06:05.828 "claimed": false, 00:06:05.828 "zoned": false, 00:06:05.828 "supported_io_types": { 00:06:05.828 "read": true, 00:06:05.828 "write": true, 00:06:05.828 "unmap": false, 00:06:05.828 "flush": false, 00:06:05.828 "reset": true, 00:06:05.828 "nvme_admin": false, 00:06:05.828 "nvme_io": false, 00:06:05.828 "nvme_io_md": false, 00:06:05.828 "write_zeroes": true, 00:06:05.828 "zcopy": 
false, 00:06:05.828 "get_zone_info": false, 00:06:05.828 "zone_management": false, 00:06:05.828 "zone_append": false, 00:06:05.828 "compare": false, 00:06:05.828 "compare_and_write": false, 00:06:05.828 "abort": true, 00:06:05.828 "seek_hole": false, 00:06:05.828 "seek_data": false, 00:06:05.828 "copy": false, 00:06:05.828 "nvme_iov_md": false 00:06:05.828 }, 00:06:05.828 "driver_specific": {} 00:06:05.828 } 00:06:05.828 ] 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:05.828 17:25:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:06:05.828 Running I/O for 60 seconds... 
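The QoS suite starting here drives two bdevs from a bdevperf instance launched with -z, so the devices are created over RPC after startup: Malloc_0 is the device that gets throttled and Null_1 stays unthrottled for comparison. A minimal sketch of the same setup done by hand is below; every RPC name, flag and value is taken from calls that appear in this log, and only the scripts/rpc.py path is an assumption (the log itself goes through the test framework's rpc_cmd wrapper).

# Sketch of the device setup and the rate limits this run applies
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create -b Malloc_0 128 512   # 128 MiB malloc bdev, 512-byte blocks: the throttled device
$rpc bdev_null_create Null_1 128 512          # unthrottled reference device
# After measuring unthrottled throughput, the suite applies caps like these
# (the values are the ones chosen later in this run):
$rpc bdev_set_qos_limit --rw_ios_per_sec 157000 Malloc_0
$rpc bdev_set_qos_limit --rw_mbytes_per_sec 155 Null_1
$rpc bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0

Each limit is then verified by re-running I/O and checking that the measured rate lands within ±10% of the cap, which is the lower_limit/upper_limit arithmetic visible in the traces that follow.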
00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 628077.23 2512308.92 0.00 0.00 2692096.00 0.00 0.00 ' 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=628077.23 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 628077 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=628077 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=157000 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 157000 -gt 1000 ']' 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 157000 Malloc_0 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 157000 IOPS Malloc_0 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.383 17:25:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:12.383 ************************************ 00:06:12.383 START TEST bdev_qos_iops 00:06:12.383 ************************************ 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 157000 IOPS Malloc_0 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=157000 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:12.383 17:25:06 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 156999.46 627997.85 0.00 0.00 660424.00 0.00 0.00 ' 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']' 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}' 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=156999.46 00:06:17.645 17:25:12 
blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 156999 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=156999 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']' 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=141300 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=172700 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 156999 -lt 141300 ']' 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 156999 -gt 172700 ']' 00:06:17.645 00:06:17.645 real 0m5.472s 00:06:17.645 user 0m0.087s 00:06:17.645 sys 0m0.053s 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.645 17:25:12 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x 00:06:17.645 ************************************ 00:06:17.645 END TEST bdev_qos_iops 00:06:17.645 ************************************ 00:06:17.645 17:25:12 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:06:17.645 17:25:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1 00:06:17.645 17:25:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:17.645 17:25:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:06:17.645 17:25:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:17.645 17:25:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:17.645 17:25:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1 00:06:17.645 17:25:12 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1 00:06:22.922 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 368657.49 1474629.95 0.00 0.00 1591296.00 0.00 0.00 ' 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=1591296.00 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 1591296 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=1591296 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=155 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 155 -lt 2 ']' 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 155 Null_1 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.923 17:25:17 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 17:25:18 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.923 17:25:18 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 155 BANDWIDTH Null_1 00:06:22.923 17:25:18 
blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:22.923 17:25:18 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.923 17:25:18 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:22.923 ************************************ 00:06:22.923 START TEST bdev_qos_bw 00:06:22.923 ************************************ 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 155 BANDWIDTH Null_1 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=155 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1 00:06:22.923 17:25:18 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 39687.22 158748.89 0.00 0.00 169992.00 0.00 0.00 ' 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=169992.00 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 169992 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=169992 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=158720 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=142848 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=174592 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 169992 -lt 142848 ']' 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 169992 -gt 174592 ']' 00:06:28.187 00:06:28.187 real 0m5.471s 00:06:28.187 user 0m0.118s 00:06:28.187 sys 0m0.040s 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x 00:06:28.187 ************************************ 00:06:28.187 END TEST bdev_qos_bw 00:06:28.187 ************************************ 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- 
common/autotest_common.sh@1142 -- # return 0 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.187 17:25:23 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:28.187 ************************************ 00:06:28.187 START TEST bdev_qos_ro_bw 00:06:28.187 ************************************ 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0 00:06:28.187 17:25:23 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 512.87 2051.48 0.00 0.00 2200.00 0.00 0.00 ' 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']' 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}' 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2200.00 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2200 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2200 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']' 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252 00:06:33.444 17:25:28 
blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2200 -lt 1843 ']' 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2200 -gt 2252 ']' 00:06:33.444 00:06:33.444 real 0m5.450s 00:06:33.444 user 0m0.118s 00:06:33.444 sys 0m0.039s 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.444 ************************************ 00:06:33.444 END TEST bdev_qos_ro_bw 00:06:33.444 ************************************ 00:06:33.444 17:25:28 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x 00:06:33.444 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0 00:06:33.444 17:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0 00:06:33.444 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.444 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:33.725 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.725 17:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1 00:06:33.725 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.725 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:33.725 00:06:33.725 Latency(us) 00:06:33.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.725 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:33.725 Malloc_0 : 27.95 210559.60 822.50 0.00 0.00 1205.09 348.16 503317.20 00:06:33.725 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:33.725 Null_1 : 27.99 261687.09 1022.22 0.00 0.00 977.85 71.21 30265.76 00:06:33.725 =================================================================================================================== 00:06:33.725 Total : 472246.69 1844.71 0.00 0.00 1079.10 71.21 503317.20 00:06:33.984 0 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48142 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 48142 ']' 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 48142 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps -c -o command 48142 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # tail -1 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:33.984 killing process with pid 48142 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48142' 00:06:33.984 Received shutdown signal, test time was about 27.997874 seconds 00:06:33.984 00:06:33.984 Latency(us) 00:06:33.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:33.984 
=================================================================================================================== 00:06:33.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 48142 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 48142 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT 00:06:33.984 00:06:33.984 real 0m29.447s 00:06:33.984 user 0m30.102s 00:06:33.984 sys 0m0.918s 00:06:33.984 ************************************ 00:06:33.984 END TEST bdev_qos 00:06:33.984 ************************************ 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.984 17:25:29 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x 00:06:33.984 17:25:29 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:33.984 17:25:29 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite '' 00:06:33.984 17:25:29 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:33.984 17:25:29 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.984 17:25:29 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:33.984 ************************************ 00:06:33.984 START TEST bdev_qd_sampling 00:06:33.984 ************************************ 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite '' 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=48363 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C '' 00:06:33.984 Process bdev QD sampling period testing pid: 48363 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 48363' 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 48363 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 48363 ']' 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.984 17:25:29 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:33.984 [2024-07-15 17:25:29.789160] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
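The qd_sampling suite that begins here checks that per-bdev queue-depth sampling can be switched on and that the sampled values surface in the iostat RPC output. A sketch of the flow it exercises is below; the three RPC calls and their arguments all appear verbatim further down in this log, with only the scripts/rpc.py path assumed.

# Sketch of the queue-depth sampling flow exercised by this test
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_malloc_create -b Malloc_QD 128 512     # device whose queue depth gets sampled
$rpc bdev_set_qd_sampling_period Malloc_QD 10    # enable sampling with a period of 10
$rpc bdev_get_iostat -b Malloc_QD                # output now carries queue_depth_polling_period,
                                                 # queue_depth, io_time and weighted_io_time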
00:06:33.984 [2024-07-15 17:25:29.789405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:34.551 EAL: TSC is not safe to use in SMP mode 00:06:34.551 EAL: TSC is not invariant 00:06:34.551 [2024-07-15 17:25:30.363550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.808 [2024-07-15 17:25:30.472825] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:34.808 [2024-07-15 17:25:30.472904] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:34.808 [2024-07-15 17:25:30.476121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.808 [2024-07-15 17:25:30.476107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:35.098 Malloc_QD 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:35.098 [ 00:06:35.098 { 00:06:35.098 "name": "Malloc_QD", 00:06:35.098 "aliases": [ 00:06:35.098 "3c44928c-42cf-11ef-96ac-773515fba644" 00:06:35.098 ], 00:06:35.098 "product_name": "Malloc disk", 00:06:35.098 "block_size": 512, 00:06:35.098 "num_blocks": 262144, 00:06:35.098 "uuid": "3c44928c-42cf-11ef-96ac-773515fba644", 00:06:35.098 "assigned_rate_limits": { 00:06:35.098 "rw_ios_per_sec": 0, 00:06:35.098 "rw_mbytes_per_sec": 0, 00:06:35.098 "r_mbytes_per_sec": 0, 00:06:35.098 "w_mbytes_per_sec": 0 00:06:35.098 }, 00:06:35.098 "claimed": false, 
00:06:35.098 "zoned": false, 00:06:35.098 "supported_io_types": { 00:06:35.098 "read": true, 00:06:35.098 "write": true, 00:06:35.098 "unmap": true, 00:06:35.098 "flush": true, 00:06:35.098 "reset": true, 00:06:35.098 "nvme_admin": false, 00:06:35.098 "nvme_io": false, 00:06:35.098 "nvme_io_md": false, 00:06:35.098 "write_zeroes": true, 00:06:35.098 "zcopy": true, 00:06:35.098 "get_zone_info": false, 00:06:35.098 "zone_management": false, 00:06:35.098 "zone_append": false, 00:06:35.098 "compare": false, 00:06:35.098 "compare_and_write": false, 00:06:35.098 "abort": true, 00:06:35.098 "seek_hole": false, 00:06:35.098 "seek_data": false, 00:06:35.098 "copy": true, 00:06:35.098 "nvme_iov_md": false 00:06:35.098 }, 00:06:35.098 "memory_domains": [ 00:06:35.098 { 00:06:35.098 "dma_device_id": "system", 00:06:35.098 "dma_device_type": 1 00:06:35.098 }, 00:06:35.098 { 00:06:35.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:35.098 "dma_device_type": 2 00:06:35.098 } 00:06:35.098 ], 00:06:35.098 "driver_specific": {} 00:06:35.098 } 00:06:35.098 ] 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:06:35.098 17:25:30 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:35.357 Running I/O for 5 seconds... 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:06:37.258 "tick_rate": 2199996845, 00:06:37.258 "ticks": 742346178791, 00:06:37.258 "bdevs": [ 00:06:37.258 { 00:06:37.258 "name": "Malloc_QD", 00:06:37.258 "bytes_read": 12373234176, 00:06:37.258 "num_read_ops": 3020803, 00:06:37.258 "bytes_written": 0, 00:06:37.258 "num_write_ops": 0, 00:06:37.258 "bytes_unmapped": 0, 00:06:37.258 "num_unmap_ops": 0, 00:06:37.258 "bytes_copied": 0, 00:06:37.258 "num_copy_ops": 0, 00:06:37.258 "read_latency_ticks": 2199104079470, 00:06:37.258 "max_read_latency_ticks": 1307758, 00:06:37.258 "min_read_latency_ticks": 
66264, 00:06:37.258 "write_latency_ticks": 0, 00:06:37.258 "max_write_latency_ticks": 0, 00:06:37.258 "min_write_latency_ticks": 0, 00:06:37.258 "unmap_latency_ticks": 0, 00:06:37.258 "max_unmap_latency_ticks": 0, 00:06:37.258 "min_unmap_latency_ticks": 0, 00:06:37.258 "copy_latency_ticks": 0, 00:06:37.258 "max_copy_latency_ticks": 0, 00:06:37.258 "min_copy_latency_ticks": 0, 00:06:37.258 "io_error": {}, 00:06:37.258 "queue_depth_polling_period": 10, 00:06:37.258 "queue_depth": 512, 00:06:37.258 "io_time": 360, 00:06:37.258 "weighted_io_time": 184320 00:06:37.258 } 00:06:37.258 ] 00:06:37.258 }' 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:37.258 00:06:37.258 Latency(us) 00:06:37.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.258 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:37.258 Malloc_QD : 1.98 763016.53 2980.53 0.00 0.00 335.24 59.35 456.15 00:06:37.258 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:37.258 Malloc_QD : 1.98 782702.18 3057.43 0.00 0.00 326.80 53.53 595.78 00:06:37.258 =================================================================================================================== 00:06:37.258 Total : 1545718.71 6037.96 0.00 0.00 330.96 53.53 595.78 00:06:37.258 0 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 48363 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 48363 ']' 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 48363 00:06:37.258 17:25:32 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:06:37.258 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:37.258 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps -c -o command 48363 00:06:37.258 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # tail -1 00:06:37.258 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:37.258 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:37.258 killing process with pid 48363 00:06:37.258 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48363' 00:06:37.258 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 48363 00:06:37.258 Received shutdown signal, test time was about 2.015054 seconds 00:06:37.258 00:06:37.258 Latency(us) 
00:06:37.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:37.258 =================================================================================================================== 00:06:37.258 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:37.258 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 48363 00:06:37.517 17:25:33 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:06:37.517 00:06:37.517 real 0m3.410s 00:06:37.517 user 0m6.027s 00:06:37.517 sys 0m0.705s 00:06:37.517 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.517 17:25:33 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 ************************************ 00:06:37.517 END TEST bdev_qd_sampling 00:06:37.517 ************************************ 00:06:37.517 17:25:33 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:37.517 17:25:33 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:06:37.517 17:25:33 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:37.517 17:25:33 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.517 17:25:33 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 ************************************ 00:06:37.517 START TEST bdev_error 00:06:37.517 ************************************ 00:06:37.517 17:25:33 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:06:37.517 17:25:33 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:06:37.517 17:25:33 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:06:37.517 17:25:33 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:06:37.517 17:25:33 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=48410 00:06:37.517 Process error testing pid: 48410 00:06:37.517 17:25:33 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 48410' 00:06:37.517 17:25:33 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 48410 00:06:37.517 17:25:33 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:06:37.517 17:25:33 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48410 ']' 00:06:37.517 17:25:33 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.517 17:25:33 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.517 17:25:33 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.517 17:25:33 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.517 17:25:33 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:37.517 [2024-07-15 17:25:33.245547] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:06:37.517 [2024-07-15 17:25:33.245799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:38.087 EAL: TSC is not safe to use in SMP mode 00:06:38.087 EAL: TSC is not invariant 00:06:38.087 [2024-07-15 17:25:33.786893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.087 [2024-07-15 17:25:33.873073] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:38.087 [2024-07-15 17:25:33.875216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:06:38.665 17:25:34 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:38.665 Dev_1 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.665 17:25:34 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:38.665 [ 00:06:38.665 { 00:06:38.665 "name": "Dev_1", 00:06:38.665 "aliases": [ 00:06:38.665 "3e59651d-42cf-11ef-96ac-773515fba644" 00:06:38.665 ], 00:06:38.665 "product_name": "Malloc disk", 00:06:38.665 "block_size": 512, 00:06:38.665 "num_blocks": 262144, 00:06:38.665 "uuid": "3e59651d-42cf-11ef-96ac-773515fba644", 00:06:38.665 "assigned_rate_limits": { 00:06:38.665 "rw_ios_per_sec": 0, 00:06:38.665 "rw_mbytes_per_sec": 0, 00:06:38.665 "r_mbytes_per_sec": 0, 00:06:38.665 "w_mbytes_per_sec": 0 00:06:38.665 }, 00:06:38.665 "claimed": false, 00:06:38.665 "zoned": false, 00:06:38.665 "supported_io_types": { 00:06:38.665 "read": true, 00:06:38.665 "write": true, 00:06:38.665 "unmap": true, 00:06:38.665 "flush": true, 00:06:38.665 "reset": true, 00:06:38.665 "nvme_admin": false, 00:06:38.665 "nvme_io": false, 00:06:38.665 "nvme_io_md": false, 00:06:38.665 "write_zeroes": true, 00:06:38.665 "zcopy": true, 
00:06:38.665 "get_zone_info": false, 00:06:38.665 "zone_management": false, 00:06:38.665 "zone_append": false, 00:06:38.665 "compare": false, 00:06:38.665 "compare_and_write": false, 00:06:38.665 "abort": true, 00:06:38.665 "seek_hole": false, 00:06:38.665 "seek_data": false, 00:06:38.665 "copy": true, 00:06:38.665 "nvme_iov_md": false 00:06:38.665 }, 00:06:38.665 "memory_domains": [ 00:06:38.665 { 00:06:38.665 "dma_device_id": "system", 00:06:38.665 "dma_device_type": 1 00:06:38.665 }, 00:06:38.665 { 00:06:38.665 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.665 "dma_device_type": 2 00:06:38.665 } 00:06:38.665 ], 00:06:38.665 "driver_specific": {} 00:06:38.665 } 00:06:38.665 ] 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:38.665 17:25:34 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:06:38.665 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 true 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.666 17:25:34 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 Dev_2 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.666 17:25:34 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 [ 00:06:38.666 { 00:06:38.666 "name": "Dev_2", 00:06:38.666 "aliases": [ 00:06:38.666 "3e5f7f0c-42cf-11ef-96ac-773515fba644" 00:06:38.666 ], 00:06:38.666 "product_name": "Malloc disk", 00:06:38.666 "block_size": 512, 00:06:38.666 "num_blocks": 262144, 00:06:38.666 "uuid": "3e5f7f0c-42cf-11ef-96ac-773515fba644", 00:06:38.666 "assigned_rate_limits": { 00:06:38.666 "rw_ios_per_sec": 0, 00:06:38.666 "rw_mbytes_per_sec": 0, 
00:06:38.666 "r_mbytes_per_sec": 0, 00:06:38.666 "w_mbytes_per_sec": 0 00:06:38.666 }, 00:06:38.666 "claimed": false, 00:06:38.666 "zoned": false, 00:06:38.666 "supported_io_types": { 00:06:38.666 "read": true, 00:06:38.666 "write": true, 00:06:38.666 "unmap": true, 00:06:38.666 "flush": true, 00:06:38.666 "reset": true, 00:06:38.666 "nvme_admin": false, 00:06:38.666 "nvme_io": false, 00:06:38.666 "nvme_io_md": false, 00:06:38.666 "write_zeroes": true, 00:06:38.666 "zcopy": true, 00:06:38.666 "get_zone_info": false, 00:06:38.666 "zone_management": false, 00:06:38.666 "zone_append": false, 00:06:38.666 "compare": false, 00:06:38.666 "compare_and_write": false, 00:06:38.666 "abort": true, 00:06:38.666 "seek_hole": false, 00:06:38.666 "seek_data": false, 00:06:38.666 "copy": true, 00:06:38.666 "nvme_iov_md": false 00:06:38.666 }, 00:06:38.666 "memory_domains": [ 00:06:38.666 { 00:06:38.666 "dma_device_id": "system", 00:06:38.666 "dma_device_type": 1 00:06:38.666 }, 00:06:38.666 { 00:06:38.666 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:38.666 "dma_device_type": 2 00:06:38.666 } 00:06:38.666 ], 00:06:38.666 "driver_specific": {} 00:06:38.666 } 00:06:38.666 ] 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:38.666 17:25:34 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:38.666 17:25:34 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:38.666 17:25:34 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:06:38.666 17:25:34 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:38.924 Running I/O for 5 seconds... 00:06:39.859 17:25:35 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 48410 00:06:39.859 Process is existed as continue on error is set. Pid: 48410 00:06:39.859 17:25:35 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 48410' 00:06:39.859 17:25:35 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:06:39.859 17:25:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.859 17:25:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:39.859 17:25:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.859 17:25:35 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:06:39.859 17:25:35 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.859 17:25:35 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:39.859 17:25:35 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.859 17:25:35 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:06:39.859 Timeout while waiting for response: 00:06:39.859 00:06:39.859 00:06:44.058 00:06:44.058 Latency(us) 00:06:44.058 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.058 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:44.058 EE_Dev_1 : 0.96 311116.94 1215.30 5.18 0.00 51.20 24.55 159.19 00:06:44.058 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:44.058 Dev_2 : 5.00 720227.21 2813.39 0.00 0.00 22.00 6.08 23473.84 00:06:44.058 =================================================================================================================== 00:06:44.058 Total : 1031344.15 4028.69 5.18 0.00 24.25 6.08 23473.84 00:06:44.991 17:25:40 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 48410 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 48410 ']' 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 48410 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps -c -o command 48410 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # tail -1 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:44.991 killing process with pid 48410 00:06:44.991 Received shutdown signal, test time was about 5.000000 seconds 00:06:44.991 00:06:44.991 Latency(us) 00:06:44.991 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.991 =================================================================================================================== 00:06:44.991 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48410' 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 48410 00:06:44.991 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 48410 00:06:45.249 17:25:40 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=48450 00:06:45.249 Process error testing pid: 48450 00:06:45.249 17:25:40 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:06:45.249 17:25:40 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 48450' 00:06:45.249 17:25:40 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 48450 00:06:45.249 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48450 ']' 00:06:45.249 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.249 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:45.249 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.249 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.249 17:25:40 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:45.249 [2024-07-15 17:25:40.858909] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:06:45.250 [2024-07-15 17:25:40.859183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:45.815 EAL: TSC is not safe to use in SMP mode 00:06:45.815 EAL: TSC is not invariant 00:06:45.815 [2024-07-15 17:25:41.402459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.815 [2024-07-15 17:25:41.499226] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:06:45.815 [2024-07-15 17:25:41.501910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:06:46.381 17:25:41 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.381 Dev_1 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.381 17:25:41 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.381 17:25:41 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.381 [ 00:06:46.381 { 00:06:46.381 "name": "Dev_1", 00:06:46.381 "aliases": [ 00:06:46.381 "42e5a5ee-42cf-11ef-96ac-773515fba644" 00:06:46.381 ], 00:06:46.381 "product_name": "Malloc disk", 00:06:46.381 "block_size": 512, 00:06:46.381 "num_blocks": 262144, 00:06:46.381 "uuid": "42e5a5ee-42cf-11ef-96ac-773515fba644", 00:06:46.381 "assigned_rate_limits": { 00:06:46.381 "rw_ios_per_sec": 0, 00:06:46.381 "rw_mbytes_per_sec": 0, 00:06:46.381 "r_mbytes_per_sec": 0, 00:06:46.381 "w_mbytes_per_sec": 0 00:06:46.381 }, 00:06:46.381 "claimed": false, 00:06:46.381 "zoned": false, 00:06:46.381 "supported_io_types": { 00:06:46.381 "read": true, 00:06:46.381 "write": true, 00:06:46.381 "unmap": true, 00:06:46.381 "flush": true, 00:06:46.381 "reset": true, 00:06:46.381 "nvme_admin": false, 00:06:46.381 "nvme_io": false, 00:06:46.381 "nvme_io_md": false, 00:06:46.381 "write_zeroes": true, 00:06:46.381 "zcopy": true, 00:06:46.381 "get_zone_info": false, 00:06:46.381 "zone_management": false, 00:06:46.381 "zone_append": false, 00:06:46.381 "compare": false, 00:06:46.381 "compare_and_write": false, 00:06:46.381 "abort": true, 00:06:46.381 "seek_hole": false, 00:06:46.381 "seek_data": false, 00:06:46.381 "copy": true, 00:06:46.381 "nvme_iov_md": false 00:06:46.381 }, 00:06:46.381 "memory_domains": [ 00:06:46.381 { 00:06:46.381 "dma_device_id": "system", 00:06:46.381 "dma_device_type": 1 00:06:46.381 }, 00:06:46.381 { 00:06:46.381 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.381 "dma_device_type": 2 00:06:46.381 } 00:06:46.381 ], 00:06:46.381 "driver_specific": {} 00:06:46.381 } 00:06:46.381 ] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:46.381 17:25:42 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.381 true 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.381 Dev_2 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.381 [ 00:06:46.381 { 00:06:46.381 "name": "Dev_2", 00:06:46.381 "aliases": [ 00:06:46.381 "42ebbfcc-42cf-11ef-96ac-773515fba644" 00:06:46.381 ], 00:06:46.381 "product_name": "Malloc disk", 00:06:46.381 "block_size": 512, 00:06:46.381 "num_blocks": 262144, 00:06:46.381 "uuid": "42ebbfcc-42cf-11ef-96ac-773515fba644", 00:06:46.381 "assigned_rate_limits": { 00:06:46.381 "rw_ios_per_sec": 0, 00:06:46.381 "rw_mbytes_per_sec": 0, 00:06:46.381 "r_mbytes_per_sec": 0, 00:06:46.381 "w_mbytes_per_sec": 0 00:06:46.381 }, 00:06:46.381 "claimed": false, 00:06:46.381 "zoned": false, 00:06:46.381 "supported_io_types": { 00:06:46.381 "read": true, 00:06:46.381 "write": true, 00:06:46.381 "unmap": true, 00:06:46.381 "flush": true, 00:06:46.381 "reset": true, 00:06:46.381 "nvme_admin": false, 00:06:46.381 "nvme_io": false, 00:06:46.381 "nvme_io_md": false, 00:06:46.381 "write_zeroes": true, 00:06:46.381 "zcopy": true, 00:06:46.381 "get_zone_info": false, 
00:06:46.381 "zone_management": false, 00:06:46.381 "zone_append": false, 00:06:46.381 "compare": false, 00:06:46.381 "compare_and_write": false, 00:06:46.381 "abort": true, 00:06:46.381 "seek_hole": false, 00:06:46.381 "seek_data": false, 00:06:46.381 "copy": true, 00:06:46.381 "nvme_iov_md": false 00:06:46.381 }, 00:06:46.381 "memory_domains": [ 00:06:46.381 { 00:06:46.381 "dma_device_id": "system", 00:06:46.381 "dma_device_type": 1 00:06:46.381 }, 00:06:46.381 { 00:06:46.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:46.381 "dma_device_type": 2 00:06:46.381 } 00:06:46.381 ], 00:06:46.381 "driver_specific": {} 00:06:46.381 } 00:06:46.381 ] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:06:46.381 17:25:42 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.381 17:25:42 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 48450 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48450 00:06:46.381 17:25:42 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:46.381 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48450 00:06:46.381 Running I/O for 5 seconds... 
00:06:46.381 task offset: 7680 on job bdev=EE_Dev_1 fails 00:06:46.381 00:06:46.382 Latency(us) 00:06:46.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:46.382 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:46.382 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:06:46.382 EE_Dev_1 : 0.00 145695.36 569.12 33112.58 0.00 67.19 23.04 140.57 00:06:46.382 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:06:46.382 Dev_2 : 0.00 180790.96 706.21 0.00 0.00 39.47 25.37 61.91 00:06:46.382 =================================================================================================================== 00:06:46.382 Total : 326486.32 1275.34 33112.58 0.00 52.16 23.04 140.57 00:06:46.382 [2024-07-15 17:25:42.176631] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.382 request: 00:06:46.382 { 00:06:46.382 "method": "perform_tests", 00:06:46.382 "req_id": 1 00:06:46.382 } 00:06:46.382 Got JSON-RPC error response 00:06:46.382 response: 00:06:46.382 { 00:06:46.382 "code": -32603, 00:06:46.382 "message": "bdevperf failed with error Operation not permitted" 00:06:46.382 } 00:06:46.684 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:06:46.684 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.684 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:06:46.684 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:06:46.684 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:06:46.685 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.685 00:06:46.685 real 0m9.160s 00:06:46.685 user 0m9.269s 00:06:46.685 sys 0m1.351s 00:06:46.685 ************************************ 00:06:46.685 END TEST bdev_error 00:06:46.685 ************************************ 00:06:46.685 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.685 17:25:42 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:46.685 17:25:42 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:46.685 17:25:42 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:06:46.685 17:25:42 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:46.685 17:25:42 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.685 17:25:42 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:46.685 ************************************ 00:06:46.685 START TEST bdev_stat 00:06:46.685 ************************************ 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 -- # STAT_DEV=Malloc_STAT 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=48477 00:06:46.685 Process Bdev IO statistics testing pid: 48477 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 48477' 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 48477 00:06:46.685 
17:25:42 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 48477 ']' 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:46.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:46.685 17:25:42 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:46.685 [2024-07-15 17:25:42.453332] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:06:46.685 [2024-07-15 17:25:42.453502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:47.249 EAL: TSC is not safe to use in SMP mode 00:06:47.249 EAL: TSC is not invariant 00:06:47.249 [2024-07-15 17:25:43.002072] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.506 [2024-07-15 17:25:43.090722] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:47.506 [2024-07-15 17:25:43.090811] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:47.506 [2024-07-15 17:25:43.093590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.506 [2024-07-15 17:25:43.093582] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:47.763 Malloc_STAT 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:06:47.763 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:47.764 
17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:47.764 [ 00:06:47.764 { 00:06:47.764 "name": "Malloc_STAT", 00:06:47.764 "aliases": [ 00:06:47.764 "43c63b61-42cf-11ef-96ac-773515fba644" 00:06:47.764 ], 00:06:47.764 "product_name": "Malloc disk", 00:06:47.764 "block_size": 512, 00:06:47.764 "num_blocks": 262144, 00:06:47.764 "uuid": "43c63b61-42cf-11ef-96ac-773515fba644", 00:06:47.764 "assigned_rate_limits": { 00:06:47.764 "rw_ios_per_sec": 0, 00:06:47.764 "rw_mbytes_per_sec": 0, 00:06:47.764 "r_mbytes_per_sec": 0, 00:06:47.764 "w_mbytes_per_sec": 0 00:06:47.764 }, 00:06:47.764 "claimed": false, 00:06:47.764 "zoned": false, 00:06:47.764 "supported_io_types": { 00:06:47.764 "read": true, 00:06:47.764 "write": true, 00:06:47.764 "unmap": true, 00:06:47.764 "flush": true, 00:06:47.764 "reset": true, 00:06:47.764 "nvme_admin": false, 00:06:47.764 "nvme_io": false, 00:06:47.764 "nvme_io_md": false, 00:06:47.764 "write_zeroes": true, 00:06:47.764 "zcopy": true, 00:06:47.764 "get_zone_info": false, 00:06:47.764 "zone_management": false, 00:06:47.764 "zone_append": false, 00:06:47.764 "compare": false, 00:06:47.764 "compare_and_write": false, 00:06:47.764 "abort": true, 00:06:47.764 "seek_hole": false, 00:06:47.764 "seek_data": false, 00:06:47.764 "copy": true, 00:06:47.764 "nvme_iov_md": false 00:06:47.764 }, 00:06:47.764 "memory_domains": [ 00:06:47.764 { 00:06:47.764 "dma_device_id": "system", 00:06:47.764 "dma_device_type": 1 00:06:47.764 }, 00:06:47.764 { 00:06:47.764 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:47.764 "dma_device_type": 2 00:06:47.764 } 00:06:47.764 ], 00:06:47.764 "driver_specific": {} 00:06:47.764 } 00:06:47.764 ] 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:06:47.764 17:25:43 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:48.022 Running I/O for 10 seconds... 
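The bdev_stat run above reduces to a short RPC sequence that is visible in the xtrace output; the following is a minimal sketch of the same flow, assuming a bdevperf instance listening on the default /var/tmp/spdk.sock and the repository paths used by the test (the Malloc_STAT name matches the bdev created above):

  # create the 128 MiB malloc bdev (512-byte blocks) that the statistics are collected on
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create -b Malloc_STAT 128 512
  # run the queued bdevperf job in the background so counters can be sampled mid-run
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_STAT        # aggregate counters (io_count1)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c     # per-channel breakdown
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_iostat -b Malloc_STAT        # second snapshot (io_count2 >= io_count1)
  # clean up once the run completes
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc_STAT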
00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:49.920 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:06:49.921 "tick_rate": 2199996845, 00:06:49.921 "ticks": 770216605542, 00:06:49.921 "bdevs": [ 00:06:49.921 { 00:06:49.921 "name": "Malloc_STAT", 00:06:49.921 "bytes_read": 12520034816, 00:06:49.921 "num_read_ops": 3056643, 00:06:49.921 "bytes_written": 0, 00:06:49.921 "num_write_ops": 0, 00:06:49.921 "bytes_unmapped": 0, 00:06:49.921 "num_unmap_ops": 0, 00:06:49.921 "bytes_copied": 0, 00:06:49.921 "num_copy_ops": 0, 00:06:49.921 "read_latency_ticks": 2264846993100, 00:06:49.921 "max_read_latency_ticks": 1346605, 00:06:49.921 "min_read_latency_ticks": 42881, 00:06:49.921 "write_latency_ticks": 0, 00:06:49.921 "max_write_latency_ticks": 0, 00:06:49.921 "min_write_latency_ticks": 0, 00:06:49.921 "unmap_latency_ticks": 0, 00:06:49.921 "max_unmap_latency_ticks": 0, 00:06:49.921 "min_unmap_latency_ticks": 0, 00:06:49.921 "copy_latency_ticks": 0, 00:06:49.921 "max_copy_latency_ticks": 0, 00:06:49.921 "min_copy_latency_ticks": 0, 00:06:49.921 "io_error": {} 00:06:49.921 } 00:06:49.921 ] 00:06:49.921 }' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=3056643 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:06:49.921 "tick_rate": 2199996845, 00:06:49.921 "ticks": 770271630194, 00:06:49.921 "name": "Malloc_STAT", 00:06:49.921 "channels": [ 00:06:49.921 { 00:06:49.921 "thread_id": 2, 00:06:49.921 "bytes_read": 6232735744, 00:06:49.921 "num_read_ops": 1521664, 00:06:49.921 "bytes_written": 0, 00:06:49.921 "num_write_ops": 0, 00:06:49.921 "bytes_unmapped": 0, 00:06:49.921 "num_unmap_ops": 0, 
00:06:49.921 "bytes_copied": 0, 00:06:49.921 "num_copy_ops": 0, 00:06:49.921 "read_latency_ticks": 1146422489383, 00:06:49.921 "max_read_latency_ticks": 1346605, 00:06:49.921 "min_read_latency_ticks": 634194, 00:06:49.921 "write_latency_ticks": 0, 00:06:49.921 "max_write_latency_ticks": 0, 00:06:49.921 "min_write_latency_ticks": 0, 00:06:49.921 "unmap_latency_ticks": 0, 00:06:49.921 "max_unmap_latency_ticks": 0, 00:06:49.921 "min_unmap_latency_ticks": 0, 00:06:49.921 "copy_latency_ticks": 0, 00:06:49.921 "max_copy_latency_ticks": 0, 00:06:49.921 "min_copy_latency_ticks": 0 00:06:49.921 }, 00:06:49.921 { 00:06:49.921 "thread_id": 3, 00:06:49.921 "bytes_read": 6441402368, 00:06:49.921 "num_read_ops": 1572608, 00:06:49.921 "bytes_written": 0, 00:06:49.921 "num_write_ops": 0, 00:06:49.921 "bytes_unmapped": 0, 00:06:49.921 "num_unmap_ops": 0, 00:06:49.921 "bytes_copied": 0, 00:06:49.921 "num_copy_ops": 0, 00:06:49.921 "read_latency_ticks": 1146451388202, 00:06:49.921 "max_read_latency_ticks": 1040358, 00:06:49.921 "min_read_latency_ticks": 614012, 00:06:49.921 "write_latency_ticks": 0, 00:06:49.921 "max_write_latency_ticks": 0, 00:06:49.921 "min_write_latency_ticks": 0, 00:06:49.921 "unmap_latency_ticks": 0, 00:06:49.921 "max_unmap_latency_ticks": 0, 00:06:49.921 "min_unmap_latency_ticks": 0, 00:06:49.921 "copy_latency_ticks": 0, 00:06:49.921 "max_copy_latency_ticks": 0, 00:06:49.921 "min_copy_latency_ticks": 0 00:06:49.921 } 00:06:49.921 ] 00:06:49.921 }' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1521664 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1521664 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1572608 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=3094272 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:06:49.921 "tick_rate": 2199996845, 00:06:49.921 "ticks": 770354178761, 00:06:49.921 "bdevs": [ 00:06:49.921 { 00:06:49.921 "name": "Malloc_STAT", 00:06:49.921 "bytes_read": 12903813632, 00:06:49.921 "num_read_ops": 3150339, 00:06:49.921 "bytes_written": 0, 00:06:49.921 "num_write_ops": 0, 00:06:49.921 "bytes_unmapped": 0, 00:06:49.921 "num_unmap_ops": 0, 00:06:49.921 "bytes_copied": 0, 00:06:49.921 "num_copy_ops": 0, 00:06:49.921 "read_latency_ticks": 2335222525673, 00:06:49.921 "max_read_latency_ticks": 1346605, 00:06:49.921 "min_read_latency_ticks": 42881, 00:06:49.921 "write_latency_ticks": 0, 00:06:49.921 "max_write_latency_ticks": 0, 00:06:49.921 "min_write_latency_ticks": 0, 00:06:49.921 "unmap_latency_ticks": 0, 00:06:49.921 "max_unmap_latency_ticks": 0, 00:06:49.921 "min_unmap_latency_ticks": 0, 00:06:49.921 "copy_latency_ticks": 0, 00:06:49.921 "max_copy_latency_ticks": 0, 00:06:49.921 
"min_copy_latency_ticks": 0, 00:06:49.921 "io_error": {} 00:06:49.921 } 00:06:49.921 ] 00:06:49.921 }' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=3150339 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3094272 -lt 3056643 ']' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3094272 -gt 3150339 ']' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:49.921 00:06:49.921 Latency(us) 00:06:49.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.921 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:49.921 Malloc_STAT : 2.10 746302.28 2915.24 0.00 0.00 342.74 57.02 614.40 00:06:49.921 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:49.921 Malloc_STAT : 2.10 771993.19 3015.60 0.00 0.00 331.33 53.06 472.90 00:06:49.921 =================================================================================================================== 00:06:49.921 Total : 1518295.47 5930.84 0.00 0.00 336.94 53.06 614.40 00:06:49.921 0 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 48477 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 48477 ']' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 48477 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps -c -o command 48477 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # tail -1 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:49.921 killing process with pid 48477 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48477' 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@967 -- # kill 48477 00:06:49.921 Received shutdown signal, test time was about 2.131392 seconds 00:06:49.921 00:06:49.921 Latency(us) 00:06:49.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:49.921 =================================================================================================================== 00:06:49.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:49.921 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 48477 00:06:50.179 17:25:45 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:06:50.179 00:06:50.179 real 0m3.473s 00:06:50.179 user 0m6.189s 00:06:50.179 sys 0m0.702s 00:06:50.179 ************************************ 00:06:50.179 
END TEST bdev_stat 00:06:50.179 ************************************ 00:06:50.179 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.179 17:25:45 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:06:50.179 17:25:45 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:06:50.179 17:25:45 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:06:50.179 00:06:50.179 real 1m33.698s 00:06:50.179 user 4m28.804s 00:06:50.179 sys 0m26.677s 00:06:50.179 17:25:45 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.179 17:25:45 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:50.179 ************************************ 00:06:50.179 END TEST blockdev_general 00:06:50.179 ************************************ 00:06:50.179 17:25:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:50.179 17:25:45 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:50.179 17:25:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.179 17:25:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.179 17:25:45 -- common/autotest_common.sh@10 -- # set +x 00:06:50.179 ************************************ 00:06:50.179 START TEST bdev_raid 00:06:50.179 ************************************ 00:06:50.179 17:25:46 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:06:50.437 * Looking for test storage... 
00:06:50.437 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:50.437 17:25:46 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:50.437 17:25:46 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:06:50.437 17:25:46 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:06:50.437 17:25:46 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:06:50.437 17:25:46 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:06:50.437 17:25:46 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:06:50.437 17:25:46 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:06:50.437 17:25:46 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:06:50.437 17:25:46 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:06:50.437 17:25:46 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:50.437 17:25:46 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.437 17:25:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:50.437 ************************************ 00:06:50.437 START TEST raid0_resize_test 00:06:50.437 ************************************ 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=48582 00:06:50.437 Process raid pid: 48582 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 48582' 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 48582 /var/tmp/spdk-raid.sock 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 48582 ']' 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.437 17:25:46 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:50.437 [2024-07-15 17:25:46.183298] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:06:50.437 [2024-07-15 17:25:46.183532] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:51.003 EAL: TSC is not safe to use in SMP mode 00:06:51.003 EAL: TSC is not invariant 00:06:51.003 [2024-07-15 17:25:46.747321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.003 [2024-07-15 17:25:46.832252] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:51.003 [2024-07-15 17:25:46.834389] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.261 [2024-07-15 17:25:46.835164] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.261 [2024-07-15 17:25:46.835177] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:51.519 17:25:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:51.519 17:25:47 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:06:51.519 17:25:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:06:51.777 Base_1 00:06:51.777 17:25:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:06:52.034 Base_2 00:06:52.034 17:25:47 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:06:52.291 [2024-07-15 17:25:48.006906] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:06:52.291 [2024-07-15 17:25:48.007489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:06:52.291 [2024-07-15 17:25:48.007514] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x21e67fe34a00 00:06:52.291 [2024-07-15 17:25:48.007519] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:52.291 [2024-07-15 17:25:48.007553] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x21e67fe97e20 00:06:52.291 [2024-07-15 17:25:48.007618] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x21e67fe34a00 00:06:52.291 [2024-07-15 17:25:48.007622] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x21e67fe34a00 00:06:52.291 [2024-07-15 17:25:48.007656] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:52.291 17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:06:52.548 [2024-07-15 17:25:48.290936] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:52.548 [2024-07-15 17:25:48.290960] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:06:52.548 true 00:06:52.548 17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:52.548 17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:06:52.805 [2024-07-15 17:25:48.586952] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:52.805 
17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:06:52.805 17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:06:52.805 17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:06:52.805 17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:06:53.063 [2024-07-15 17:25:48.870933] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:06:53.063 [2024-07-15 17:25:48.870959] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:06:53.063 [2024-07-15 17:25:48.871005] bdev_raid.c:2290:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:06:53.063 true 00:06:53.063 17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:06:53.063 17:25:48 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:06:53.629 [2024-07-15 17:25:49.170973] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 48582 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 48582 ']' 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 48582 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps -c -o command 48582 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # tail -1 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48582' 00:06:53.629 killing process with pid 48582 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 48582 00:06:53.629 [2024-07-15 17:25:49.198371] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:06:53.629 [2024-07-15 17:25:49.198403] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:06:53.629 [2024-07-15 17:25:49.198415] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:06:53.629 [2024-07-15 17:25:49.198419] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x21e67fe34a00 name Raid, state offline 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 48582 00:06:53.629 [2024-07-15 17:25:49.198545] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:06:53.629 
17:25:49 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:06:53.629 00:06:53.629 real 0m3.206s 00:06:53.629 user 0m4.857s 00:06:53.629 sys 0m0.799s 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.629 17:25:49 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.629 ************************************ 00:06:53.629 END TEST raid0_resize_test 00:06:53.629 ************************************ 00:06:53.629 17:25:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:06:53.629 17:25:49 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:06:53.629 17:25:49 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:06:53.629 17:25:49 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:06:53.629 17:25:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:53.629 17:25:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.629 17:25:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:06:53.629 ************************************ 00:06:53.629 START TEST raid_state_function_test 00:06:53.629 ************************************ 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:06:53.629 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=48632 00:06:53.630 Process raid pid: 48632 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48632' 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 48632 /var/tmp/spdk-raid.sock 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 48632 ']' 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.630 17:25:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:53.630 [2024-07-15 17:25:49.441505] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:06:53.630 [2024-07-15 17:25:49.441726] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:54.195 EAL: TSC is not safe to use in SMP mode 00:06:54.195 EAL: TSC is not invariant 00:06:54.195 [2024-07-15 17:25:49.997903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.453 [2024-07-15 17:25:50.079378] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:06:54.453 [2024-07-15 17:25:50.081591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.453 [2024-07-15 17:25:50.082404] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.453 [2024-07-15 17:25:50.082416] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:06:54.731 17:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.731 17:25:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:06:54.731 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:54.995 [2024-07-15 17:25:50.785551] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:54.995 [2024-07-15 17:25:50.785600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:54.995 [2024-07-15 17:25:50.785606] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:54.995 [2024-07-15 17:25:50.785615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:54.995 17:25:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:55.561 17:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:55.561 "name": "Existed_Raid", 00:06:55.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.561 "strip_size_kb": 64, 00:06:55.561 "state": "configuring", 00:06:55.561 "raid_level": "raid0", 00:06:55.561 "superblock": false, 00:06:55.561 "num_base_bdevs": 2, 00:06:55.561 "num_base_bdevs_discovered": 0, 00:06:55.561 "num_base_bdevs_operational": 2, 00:06:55.561 "base_bdevs_list": [ 00:06:55.561 { 00:06:55.561 "name": "BaseBdev1", 00:06:55.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.561 "is_configured": false, 00:06:55.561 "data_offset": 0, 00:06:55.561 "data_size": 0 00:06:55.561 }, 00:06:55.561 { 00:06:55.561 "name": "BaseBdev2", 
00:06:55.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:55.561 "is_configured": false, 00:06:55.561 "data_offset": 0, 00:06:55.561 "data_size": 0 00:06:55.561 } 00:06:55.561 ] 00:06:55.561 }' 00:06:55.561 17:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:55.561 17:25:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:55.819 17:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:56.077 [2024-07-15 17:25:51.725592] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:56.077 [2024-07-15 17:25:51.725640] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e5d50634500 name Existed_Raid, state configuring 00:06:56.077 17:25:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:56.335 [2024-07-15 17:25:52.001609] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:06:56.335 [2024-07-15 17:25:52.001676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:06:56.335 [2024-07-15 17:25:52.001681] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:56.335 [2024-07-15 17:25:52.001706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:56.335 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:06:56.592 [2024-07-15 17:25:52.242901] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:56.592 BaseBdev1 00:06:56.592 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:06:56.592 17:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:06:56.592 17:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:56.592 17:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:06:56.592 17:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:56.592 17:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:56.592 17:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:56.850 17:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:06:57.108 [ 00:06:57.108 { 00:06:57.109 "name": "BaseBdev1", 00:06:57.109 "aliases": [ 00:06:57.109 "4903af63-42cf-11ef-96ac-773515fba644" 00:06:57.109 ], 00:06:57.109 "product_name": "Malloc disk", 00:06:57.109 "block_size": 512, 00:06:57.109 "num_blocks": 65536, 00:06:57.109 "uuid": "4903af63-42cf-11ef-96ac-773515fba644", 00:06:57.109 "assigned_rate_limits": { 00:06:57.109 "rw_ios_per_sec": 0, 00:06:57.109 "rw_mbytes_per_sec": 0, 00:06:57.109 "r_mbytes_per_sec": 0, 00:06:57.109 "w_mbytes_per_sec": 0 00:06:57.109 }, 
00:06:57.109 "claimed": true, 00:06:57.109 "claim_type": "exclusive_write", 00:06:57.109 "zoned": false, 00:06:57.109 "supported_io_types": { 00:06:57.109 "read": true, 00:06:57.109 "write": true, 00:06:57.109 "unmap": true, 00:06:57.109 "flush": true, 00:06:57.109 "reset": true, 00:06:57.109 "nvme_admin": false, 00:06:57.109 "nvme_io": false, 00:06:57.109 "nvme_io_md": false, 00:06:57.109 "write_zeroes": true, 00:06:57.109 "zcopy": true, 00:06:57.109 "get_zone_info": false, 00:06:57.109 "zone_management": false, 00:06:57.109 "zone_append": false, 00:06:57.109 "compare": false, 00:06:57.109 "compare_and_write": false, 00:06:57.109 "abort": true, 00:06:57.109 "seek_hole": false, 00:06:57.109 "seek_data": false, 00:06:57.109 "copy": true, 00:06:57.109 "nvme_iov_md": false 00:06:57.109 }, 00:06:57.109 "memory_domains": [ 00:06:57.109 { 00:06:57.109 "dma_device_id": "system", 00:06:57.109 "dma_device_type": 1 00:06:57.109 }, 00:06:57.109 { 00:06:57.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:57.109 "dma_device_type": 2 00:06:57.109 } 00:06:57.109 ], 00:06:57.109 "driver_specific": {} 00:06:57.109 } 00:06:57.109 ] 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:57.109 17:25:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:57.367 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:57.367 "name": "Existed_Raid", 00:06:57.367 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:57.367 "strip_size_kb": 64, 00:06:57.367 "state": "configuring", 00:06:57.367 "raid_level": "raid0", 00:06:57.367 "superblock": false, 00:06:57.367 "num_base_bdevs": 2, 00:06:57.367 "num_base_bdevs_discovered": 1, 00:06:57.367 "num_base_bdevs_operational": 2, 00:06:57.367 "base_bdevs_list": [ 00:06:57.367 { 00:06:57.367 "name": "BaseBdev1", 00:06:57.367 "uuid": "4903af63-42cf-11ef-96ac-773515fba644", 00:06:57.367 "is_configured": true, 00:06:57.367 "data_offset": 0, 00:06:57.367 "data_size": 65536 00:06:57.367 }, 00:06:57.367 { 00:06:57.367 "name": "BaseBdev2", 00:06:57.367 "uuid": "00000000-0000-0000-0000-000000000000", 
00:06:57.367 "is_configured": false, 00:06:57.367 "data_offset": 0, 00:06:57.367 "data_size": 0 00:06:57.367 } 00:06:57.367 ] 00:06:57.367 }' 00:06:57.367 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:57.367 17:25:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:57.625 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:06:57.883 [2024-07-15 17:25:53.677706] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:06:57.883 [2024-07-15 17:25:53.677758] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e5d50634500 name Existed_Raid, state configuring 00:06:57.883 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:06:58.154 [2024-07-15 17:25:53.957761] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:06:58.154 [2024-07-15 17:25:53.958650] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:06:58.154 [2024-07-15 17:25:53.958727] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:58.154 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:58.459 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:58.459 17:25:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:06:58.459 17:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:06:58.459 "name": "Existed_Raid", 00:06:58.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.459 "strip_size_kb": 64, 00:06:58.459 "state": "configuring", 00:06:58.459 "raid_level": "raid0", 00:06:58.459 "superblock": false, 00:06:58.459 "num_base_bdevs": 2, 00:06:58.459 "num_base_bdevs_discovered": 1, 00:06:58.459 
"num_base_bdevs_operational": 2, 00:06:58.459 "base_bdevs_list": [ 00:06:58.459 { 00:06:58.459 "name": "BaseBdev1", 00:06:58.459 "uuid": "4903af63-42cf-11ef-96ac-773515fba644", 00:06:58.459 "is_configured": true, 00:06:58.459 "data_offset": 0, 00:06:58.459 "data_size": 65536 00:06:58.459 }, 00:06:58.459 { 00:06:58.459 "name": "BaseBdev2", 00:06:58.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:06:58.459 "is_configured": false, 00:06:58.459 "data_offset": 0, 00:06:58.459 "data_size": 0 00:06:58.459 } 00:06:58.459 ] 00:06:58.459 }' 00:06:58.459 17:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:06:58.459 17:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:06:59.025 17:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:06:59.283 [2024-07-15 17:25:54.889940] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:06:59.283 [2024-07-15 17:25:54.889968] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1e5d50634a00 00:06:59.283 [2024-07-15 17:25:54.889988] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:06:59.283 [2024-07-15 17:25:54.890010] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e5d50697e20 00:06:59.283 [2024-07-15 17:25:54.890096] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1e5d50634a00 00:06:59.283 [2024-07-15 17:25:54.890100] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1e5d50634a00 00:06:59.283 [2024-07-15 17:25:54.890133] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:06:59.283 BaseBdev2 00:06:59.283 17:25:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:06:59.283 17:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:06:59.283 17:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:59.283 17:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:06:59.283 17:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:59.283 17:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:59.283 17:25:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:06:59.541 17:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:06:59.800 [ 00:06:59.800 { 00:06:59.800 "name": "BaseBdev2", 00:06:59.800 "aliases": [ 00:06:59.800 "4a97c32e-42cf-11ef-96ac-773515fba644" 00:06:59.800 ], 00:06:59.800 "product_name": "Malloc disk", 00:06:59.800 "block_size": 512, 00:06:59.800 "num_blocks": 65536, 00:06:59.800 "uuid": "4a97c32e-42cf-11ef-96ac-773515fba644", 00:06:59.800 "assigned_rate_limits": { 00:06:59.800 "rw_ios_per_sec": 0, 00:06:59.800 "rw_mbytes_per_sec": 0, 00:06:59.800 "r_mbytes_per_sec": 0, 00:06:59.800 "w_mbytes_per_sec": 0 00:06:59.800 }, 00:06:59.800 "claimed": true, 00:06:59.800 "claim_type": "exclusive_write", 00:06:59.800 "zoned": 
false, 00:06:59.800 "supported_io_types": { 00:06:59.800 "read": true, 00:06:59.800 "write": true, 00:06:59.800 "unmap": true, 00:06:59.800 "flush": true, 00:06:59.800 "reset": true, 00:06:59.800 "nvme_admin": false, 00:06:59.800 "nvme_io": false, 00:06:59.800 "nvme_io_md": false, 00:06:59.800 "write_zeroes": true, 00:06:59.800 "zcopy": true, 00:06:59.800 "get_zone_info": false, 00:06:59.800 "zone_management": false, 00:06:59.800 "zone_append": false, 00:06:59.800 "compare": false, 00:06:59.800 "compare_and_write": false, 00:06:59.800 "abort": true, 00:06:59.800 "seek_hole": false, 00:06:59.800 "seek_data": false, 00:06:59.800 "copy": true, 00:06:59.800 "nvme_iov_md": false 00:06:59.800 }, 00:06:59.800 "memory_domains": [ 00:06:59.800 { 00:06:59.800 "dma_device_id": "system", 00:06:59.800 "dma_device_type": 1 00:06:59.800 }, 00:06:59.800 { 00:06:59.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:59.800 "dma_device_type": 2 00:06:59.800 } 00:06:59.800 ], 00:06:59.800 "driver_specific": {} 00:06:59.800 } 00:06:59.800 ] 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:06:59.800 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:00.058 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:00.058 "name": "Existed_Raid", 00:07:00.058 "uuid": "4a97ca63-42cf-11ef-96ac-773515fba644", 00:07:00.058 "strip_size_kb": 64, 00:07:00.058 "state": "online", 00:07:00.058 "raid_level": "raid0", 00:07:00.058 "superblock": false, 00:07:00.058 "num_base_bdevs": 2, 00:07:00.058 "num_base_bdevs_discovered": 2, 00:07:00.058 "num_base_bdevs_operational": 2, 00:07:00.058 "base_bdevs_list": [ 00:07:00.058 { 00:07:00.058 "name": "BaseBdev1", 00:07:00.058 "uuid": "4903af63-42cf-11ef-96ac-773515fba644", 00:07:00.058 "is_configured": true, 00:07:00.058 "data_offset": 0, 00:07:00.058 "data_size": 65536 00:07:00.058 }, 00:07:00.058 { 
00:07:00.058 "name": "BaseBdev2", 00:07:00.058 "uuid": "4a97c32e-42cf-11ef-96ac-773515fba644", 00:07:00.058 "is_configured": true, 00:07:00.058 "data_offset": 0, 00:07:00.058 "data_size": 65536 00:07:00.058 } 00:07:00.058 ] 00:07:00.058 }' 00:07:00.058 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:00.058 17:25:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:00.315 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:00.315 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:00.315 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:00.315 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:00.315 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:00.315 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:00.315 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:00.315 17:25:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:00.573 [2024-07-15 17:25:56.249892] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:00.573 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:00.573 "name": "Existed_Raid", 00:07:00.573 "aliases": [ 00:07:00.573 "4a97ca63-42cf-11ef-96ac-773515fba644" 00:07:00.573 ], 00:07:00.573 "product_name": "Raid Volume", 00:07:00.573 "block_size": 512, 00:07:00.573 "num_blocks": 131072, 00:07:00.573 "uuid": "4a97ca63-42cf-11ef-96ac-773515fba644", 00:07:00.573 "assigned_rate_limits": { 00:07:00.573 "rw_ios_per_sec": 0, 00:07:00.573 "rw_mbytes_per_sec": 0, 00:07:00.573 "r_mbytes_per_sec": 0, 00:07:00.573 "w_mbytes_per_sec": 0 00:07:00.573 }, 00:07:00.573 "claimed": false, 00:07:00.573 "zoned": false, 00:07:00.573 "supported_io_types": { 00:07:00.573 "read": true, 00:07:00.573 "write": true, 00:07:00.573 "unmap": true, 00:07:00.573 "flush": true, 00:07:00.573 "reset": true, 00:07:00.573 "nvme_admin": false, 00:07:00.573 "nvme_io": false, 00:07:00.573 "nvme_io_md": false, 00:07:00.573 "write_zeroes": true, 00:07:00.573 "zcopy": false, 00:07:00.573 "get_zone_info": false, 00:07:00.573 "zone_management": false, 00:07:00.573 "zone_append": false, 00:07:00.573 "compare": false, 00:07:00.573 "compare_and_write": false, 00:07:00.573 "abort": false, 00:07:00.573 "seek_hole": false, 00:07:00.573 "seek_data": false, 00:07:00.573 "copy": false, 00:07:00.573 "nvme_iov_md": false 00:07:00.573 }, 00:07:00.573 "memory_domains": [ 00:07:00.573 { 00:07:00.573 "dma_device_id": "system", 00:07:00.573 "dma_device_type": 1 00:07:00.573 }, 00:07:00.573 { 00:07:00.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.573 "dma_device_type": 2 00:07:00.573 }, 00:07:00.573 { 00:07:00.573 "dma_device_id": "system", 00:07:00.573 "dma_device_type": 1 00:07:00.573 }, 00:07:00.573 { 00:07:00.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.573 "dma_device_type": 2 00:07:00.573 } 00:07:00.573 ], 00:07:00.573 "driver_specific": { 00:07:00.573 "raid": { 00:07:00.573 "uuid": "4a97ca63-42cf-11ef-96ac-773515fba644", 00:07:00.573 "strip_size_kb": 64, 00:07:00.573 "state": 
"online", 00:07:00.573 "raid_level": "raid0", 00:07:00.573 "superblock": false, 00:07:00.573 "num_base_bdevs": 2, 00:07:00.573 "num_base_bdevs_discovered": 2, 00:07:00.573 "num_base_bdevs_operational": 2, 00:07:00.573 "base_bdevs_list": [ 00:07:00.573 { 00:07:00.573 "name": "BaseBdev1", 00:07:00.573 "uuid": "4903af63-42cf-11ef-96ac-773515fba644", 00:07:00.573 "is_configured": true, 00:07:00.573 "data_offset": 0, 00:07:00.573 "data_size": 65536 00:07:00.573 }, 00:07:00.573 { 00:07:00.573 "name": "BaseBdev2", 00:07:00.573 "uuid": "4a97c32e-42cf-11ef-96ac-773515fba644", 00:07:00.573 "is_configured": true, 00:07:00.573 "data_offset": 0, 00:07:00.573 "data_size": 65536 00:07:00.573 } 00:07:00.573 ] 00:07:00.573 } 00:07:00.573 } 00:07:00.573 }' 00:07:00.573 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:00.573 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:00.573 BaseBdev2' 00:07:00.573 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:00.573 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:00.573 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:00.829 "name": "BaseBdev1", 00:07:00.829 "aliases": [ 00:07:00.829 "4903af63-42cf-11ef-96ac-773515fba644" 00:07:00.829 ], 00:07:00.829 "product_name": "Malloc disk", 00:07:00.829 "block_size": 512, 00:07:00.829 "num_blocks": 65536, 00:07:00.829 "uuid": "4903af63-42cf-11ef-96ac-773515fba644", 00:07:00.829 "assigned_rate_limits": { 00:07:00.829 "rw_ios_per_sec": 0, 00:07:00.829 "rw_mbytes_per_sec": 0, 00:07:00.829 "r_mbytes_per_sec": 0, 00:07:00.829 "w_mbytes_per_sec": 0 00:07:00.829 }, 00:07:00.829 "claimed": true, 00:07:00.829 "claim_type": "exclusive_write", 00:07:00.829 "zoned": false, 00:07:00.829 "supported_io_types": { 00:07:00.829 "read": true, 00:07:00.829 "write": true, 00:07:00.829 "unmap": true, 00:07:00.829 "flush": true, 00:07:00.829 "reset": true, 00:07:00.829 "nvme_admin": false, 00:07:00.829 "nvme_io": false, 00:07:00.829 "nvme_io_md": false, 00:07:00.829 "write_zeroes": true, 00:07:00.829 "zcopy": true, 00:07:00.829 "get_zone_info": false, 00:07:00.829 "zone_management": false, 00:07:00.829 "zone_append": false, 00:07:00.829 "compare": false, 00:07:00.829 "compare_and_write": false, 00:07:00.829 "abort": true, 00:07:00.829 "seek_hole": false, 00:07:00.829 "seek_data": false, 00:07:00.829 "copy": true, 00:07:00.829 "nvme_iov_md": false 00:07:00.829 }, 00:07:00.829 "memory_domains": [ 00:07:00.829 { 00:07:00.829 "dma_device_id": "system", 00:07:00.829 "dma_device_type": 1 00:07:00.829 }, 00:07:00.829 { 00:07:00.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.829 "dma_device_type": 2 00:07:00.829 } 00:07:00.829 ], 00:07:00.829 "driver_specific": {} 00:07:00.829 }' 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:00.829 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:00.830 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:00.830 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:01.087 "name": "BaseBdev2", 00:07:01.087 "aliases": [ 00:07:01.087 "4a97c32e-42cf-11ef-96ac-773515fba644" 00:07:01.087 ], 00:07:01.087 "product_name": "Malloc disk", 00:07:01.087 "block_size": 512, 00:07:01.087 "num_blocks": 65536, 00:07:01.087 "uuid": "4a97c32e-42cf-11ef-96ac-773515fba644", 00:07:01.087 "assigned_rate_limits": { 00:07:01.087 "rw_ios_per_sec": 0, 00:07:01.087 "rw_mbytes_per_sec": 0, 00:07:01.087 "r_mbytes_per_sec": 0, 00:07:01.087 "w_mbytes_per_sec": 0 00:07:01.087 }, 00:07:01.087 "claimed": true, 00:07:01.087 "claim_type": "exclusive_write", 00:07:01.087 "zoned": false, 00:07:01.087 "supported_io_types": { 00:07:01.087 "read": true, 00:07:01.087 "write": true, 00:07:01.087 "unmap": true, 00:07:01.087 "flush": true, 00:07:01.087 "reset": true, 00:07:01.087 "nvme_admin": false, 00:07:01.087 "nvme_io": false, 00:07:01.087 "nvme_io_md": false, 00:07:01.087 "write_zeroes": true, 00:07:01.087 "zcopy": true, 00:07:01.087 "get_zone_info": false, 00:07:01.087 "zone_management": false, 00:07:01.087 "zone_append": false, 00:07:01.087 "compare": false, 00:07:01.087 "compare_and_write": false, 00:07:01.087 "abort": true, 00:07:01.087 "seek_hole": false, 00:07:01.087 "seek_data": false, 00:07:01.087 "copy": true, 00:07:01.087 "nvme_iov_md": false 00:07:01.087 }, 00:07:01.087 "memory_domains": [ 00:07:01.087 { 00:07:01.087 "dma_device_id": "system", 00:07:01.087 "dma_device_type": 1 00:07:01.087 }, 00:07:01.087 { 00:07:01.087 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:01.087 "dma_device_type": 2 00:07:01.087 } 00:07:01.087 ], 00:07:01.087 "driver_specific": {} 00:07:01.087 }' 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:01.087 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:01.344 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:01.344 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:01.344 17:25:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:01.603 [2024-07-15 17:25:57.193882] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:01.603 [2024-07-15 17:25:57.193908] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:01.603 [2024-07-15 17:25:57.193932] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:01.603 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:01.864 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:01.864 "name": "Existed_Raid", 00:07:01.864 "uuid": "4a97ca63-42cf-11ef-96ac-773515fba644", 00:07:01.864 "strip_size_kb": 64, 00:07:01.864 "state": "offline", 00:07:01.864 "raid_level": "raid0", 00:07:01.864 "superblock": false, 00:07:01.864 
"num_base_bdevs": 2, 00:07:01.864 "num_base_bdevs_discovered": 1, 00:07:01.864 "num_base_bdevs_operational": 1, 00:07:01.864 "base_bdevs_list": [ 00:07:01.864 { 00:07:01.864 "name": null, 00:07:01.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:01.864 "is_configured": false, 00:07:01.864 "data_offset": 0, 00:07:01.864 "data_size": 65536 00:07:01.864 }, 00:07:01.864 { 00:07:01.864 "name": "BaseBdev2", 00:07:01.864 "uuid": "4a97c32e-42cf-11ef-96ac-773515fba644", 00:07:01.864 "is_configured": true, 00:07:01.864 "data_offset": 0, 00:07:01.864 "data_size": 65536 00:07:01.864 } 00:07:01.864 ] 00:07:01.864 }' 00:07:01.864 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:01.864 17:25:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:02.122 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:02.122 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:02.122 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:02.122 17:25:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:02.379 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:02.379 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:02.379 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:02.638 [2024-07-15 17:25:58.335793] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:02.638 [2024-07-15 17:25:58.335830] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e5d50634a00 name Existed_Raid, state offline 00:07:02.638 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:02.638 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:02.638 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:02.638 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 48632 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 48632 ']' 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 48632 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 48632 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:02.896 killing process with pid 48632 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48632' 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 48632 00:07:02.896 [2024-07-15 17:25:58.660258] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:02.896 [2024-07-15 17:25:58.660292] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:02.896 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 48632 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:03.154 00:07:03.154 real 0m9.408s 00:07:03.154 user 0m16.453s 00:07:03.154 sys 0m1.615s 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:03.154 ************************************ 00:07:03.154 END TEST raid_state_function_test 00:07:03.154 ************************************ 00:07:03.154 17:25:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:03.154 17:25:58 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:03.154 17:25:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:03.154 17:25:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.154 17:25:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:03.154 ************************************ 00:07:03.154 START TEST raid_state_function_test_sb 00:07:03.154 ************************************ 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=48907 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48907' 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 48907 /var/tmp/spdk-raid.sock 00:07:03.154 Process raid pid: 48907 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 48907 ']' 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.154 17:25:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:03.155 [2024-07-15 17:25:58.890824] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:07:03.155 [2024-07-15 17:25:58.891084] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:03.721 EAL: TSC is not safe to use in SMP mode 00:07:03.721 EAL: TSC is not invariant 00:07:03.721 [2024-07-15 17:25:59.445960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.721 [2024-07-15 17:25:59.541837] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:03.721 [2024-07-15 17:25:59.543932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.721 [2024-07-15 17:25:59.544764] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:03.721 [2024-07-15 17:25:59.544779] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:04.286 17:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.286 17:25:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:07:04.286 17:25:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:04.545 [2024-07-15 17:26:00.209312] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:04.545 [2024-07-15 17:26:00.209369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:04.545 [2024-07-15 17:26:00.209379] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:04.545 [2024-07-15 17:26:00.209394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:04.545 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:04.802 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:04.802 "name": "Existed_Raid", 00:07:04.802 "uuid": "4dc373f4-42cf-11ef-96ac-773515fba644", 00:07:04.802 "strip_size_kb": 64, 00:07:04.802 "state": "configuring", 00:07:04.802 "raid_level": "raid0", 00:07:04.802 "superblock": true, 00:07:04.802 "num_base_bdevs": 2, 00:07:04.802 "num_base_bdevs_discovered": 0, 00:07:04.802 "num_base_bdevs_operational": 2, 00:07:04.802 "base_bdevs_list": [ 00:07:04.802 { 00:07:04.802 "name": "BaseBdev1", 00:07:04.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.802 "is_configured": false, 00:07:04.802 "data_offset": 0, 00:07:04.802 "data_size": 0 00:07:04.802 }, 
00:07:04.802 { 00:07:04.802 "name": "BaseBdev2", 00:07:04.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:04.802 "is_configured": false, 00:07:04.802 "data_offset": 0, 00:07:04.802 "data_size": 0 00:07:04.802 } 00:07:04.802 ] 00:07:04.802 }' 00:07:04.802 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:04.802 17:26:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:05.059 17:26:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:05.317 [2024-07-15 17:26:01.113299] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:05.317 [2024-07-15 17:26:01.113330] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28dd8da34500 name Existed_Raid, state configuring 00:07:05.317 17:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:05.578 [2024-07-15 17:26:01.353311] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:05.578 [2024-07-15 17:26:01.353378] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:05.578 [2024-07-15 17:26:01.353387] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:05.578 [2024-07-15 17:26:01.353403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:05.578 17:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:05.836 [2024-07-15 17:26:01.606316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:05.836 BaseBdev1 00:07:05.836 17:26:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:05.836 17:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:05.836 17:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:05.836 17:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:05.836 17:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:05.836 17:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:05.836 17:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:06.092 17:26:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:06.349 [ 00:07:06.349 { 00:07:06.349 "name": "BaseBdev1", 00:07:06.349 "aliases": [ 00:07:06.349 "4e9877bd-42cf-11ef-96ac-773515fba644" 00:07:06.349 ], 00:07:06.349 "product_name": "Malloc disk", 00:07:06.349 "block_size": 512, 00:07:06.349 "num_blocks": 65536, 00:07:06.349 "uuid": "4e9877bd-42cf-11ef-96ac-773515fba644", 00:07:06.349 "assigned_rate_limits": { 00:07:06.349 "rw_ios_per_sec": 0, 00:07:06.349 "rw_mbytes_per_sec": 
0, 00:07:06.349 "r_mbytes_per_sec": 0, 00:07:06.349 "w_mbytes_per_sec": 0 00:07:06.349 }, 00:07:06.349 "claimed": true, 00:07:06.349 "claim_type": "exclusive_write", 00:07:06.349 "zoned": false, 00:07:06.349 "supported_io_types": { 00:07:06.349 "read": true, 00:07:06.349 "write": true, 00:07:06.349 "unmap": true, 00:07:06.349 "flush": true, 00:07:06.349 "reset": true, 00:07:06.349 "nvme_admin": false, 00:07:06.349 "nvme_io": false, 00:07:06.349 "nvme_io_md": false, 00:07:06.349 "write_zeroes": true, 00:07:06.349 "zcopy": true, 00:07:06.349 "get_zone_info": false, 00:07:06.349 "zone_management": false, 00:07:06.349 "zone_append": false, 00:07:06.349 "compare": false, 00:07:06.349 "compare_and_write": false, 00:07:06.349 "abort": true, 00:07:06.349 "seek_hole": false, 00:07:06.349 "seek_data": false, 00:07:06.349 "copy": true, 00:07:06.349 "nvme_iov_md": false 00:07:06.349 }, 00:07:06.349 "memory_domains": [ 00:07:06.349 { 00:07:06.349 "dma_device_id": "system", 00:07:06.349 "dma_device_type": 1 00:07:06.349 }, 00:07:06.349 { 00:07:06.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:06.349 "dma_device_type": 2 00:07:06.349 } 00:07:06.349 ], 00:07:06.349 "driver_specific": {} 00:07:06.349 } 00:07:06.349 ] 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:06.349 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:06.607 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:06.607 "name": "Existed_Raid", 00:07:06.607 "uuid": "4e720377-42cf-11ef-96ac-773515fba644", 00:07:06.607 "strip_size_kb": 64, 00:07:06.607 "state": "configuring", 00:07:06.607 "raid_level": "raid0", 00:07:06.607 "superblock": true, 00:07:06.607 "num_base_bdevs": 2, 00:07:06.607 "num_base_bdevs_discovered": 1, 00:07:06.607 "num_base_bdevs_operational": 2, 00:07:06.607 "base_bdevs_list": [ 00:07:06.607 { 00:07:06.607 "name": "BaseBdev1", 00:07:06.607 "uuid": "4e9877bd-42cf-11ef-96ac-773515fba644", 00:07:06.607 "is_configured": true, 00:07:06.607 "data_offset": 2048, 00:07:06.607 "data_size": 
63488 00:07:06.607 }, 00:07:06.607 { 00:07:06.607 "name": "BaseBdev2", 00:07:06.607 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:06.607 "is_configured": false, 00:07:06.607 "data_offset": 0, 00:07:06.607 "data_size": 0 00:07:06.607 } 00:07:06.607 ] 00:07:06.607 }' 00:07:06.607 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:06.607 17:26:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:06.864 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:07.122 [2024-07-15 17:26:02.901324] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:07.122 [2024-07-15 17:26:02.901353] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28dd8da34500 name Existed_Raid, state configuring 00:07:07.122 17:26:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:07.379 [2024-07-15 17:26:03.141355] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:07.379 [2024-07-15 17:26:03.142168] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:07.379 [2024-07-15 17:26:03.142208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:07.379 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:07.380 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:07.380 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:07.380 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:07.380 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:07.380 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:07.642 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:07.642 "name": "Existed_Raid", 00:07:07.642 "uuid": "4f82d8c5-42cf-11ef-96ac-773515fba644", 00:07:07.642 "strip_size_kb": 64, 00:07:07.642 
"state": "configuring", 00:07:07.642 "raid_level": "raid0", 00:07:07.642 "superblock": true, 00:07:07.642 "num_base_bdevs": 2, 00:07:07.642 "num_base_bdevs_discovered": 1, 00:07:07.642 "num_base_bdevs_operational": 2, 00:07:07.642 "base_bdevs_list": [ 00:07:07.642 { 00:07:07.642 "name": "BaseBdev1", 00:07:07.642 "uuid": "4e9877bd-42cf-11ef-96ac-773515fba644", 00:07:07.642 "is_configured": true, 00:07:07.642 "data_offset": 2048, 00:07:07.642 "data_size": 63488 00:07:07.642 }, 00:07:07.642 { 00:07:07.642 "name": "BaseBdev2", 00:07:07.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:07.642 "is_configured": false, 00:07:07.642 "data_offset": 0, 00:07:07.642 "data_size": 0 00:07:07.642 } 00:07:07.642 ] 00:07:07.642 }' 00:07:07.642 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:07.642 17:26:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:07.902 17:26:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:08.467 [2024-07-15 17:26:04.001501] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:08.467 [2024-07-15 17:26:04.001587] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x28dd8da34a00 00:07:08.467 [2024-07-15 17:26:04.001594] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:08.467 [2024-07-15 17:26:04.001616] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x28dd8da97e20 00:07:08.467 [2024-07-15 17:26:04.001675] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x28dd8da34a00 00:07:08.467 [2024-07-15 17:26:04.001680] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x28dd8da34a00 00:07:08.467 [2024-07-15 17:26:04.001701] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:08.467 BaseBdev2 00:07:08.467 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:08.467 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:08.467 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:08.467 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:08.467 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:08.467 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:08.467 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:08.467 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:08.724 [ 00:07:08.724 { 00:07:08.724 "name": "BaseBdev2", 00:07:08.724 "aliases": [ 00:07:08.724 "50061315-42cf-11ef-96ac-773515fba644" 00:07:08.724 ], 00:07:08.724 "product_name": "Malloc disk", 00:07:08.724 "block_size": 512, 00:07:08.724 "num_blocks": 65536, 00:07:08.724 "uuid": "50061315-42cf-11ef-96ac-773515fba644", 00:07:08.724 "assigned_rate_limits": { 00:07:08.724 "rw_ios_per_sec": 0, 
00:07:08.724 "rw_mbytes_per_sec": 0, 00:07:08.724 "r_mbytes_per_sec": 0, 00:07:08.724 "w_mbytes_per_sec": 0 00:07:08.724 }, 00:07:08.724 "claimed": true, 00:07:08.724 "claim_type": "exclusive_write", 00:07:08.724 "zoned": false, 00:07:08.724 "supported_io_types": { 00:07:08.724 "read": true, 00:07:08.724 "write": true, 00:07:08.724 "unmap": true, 00:07:08.724 "flush": true, 00:07:08.724 "reset": true, 00:07:08.724 "nvme_admin": false, 00:07:08.724 "nvme_io": false, 00:07:08.724 "nvme_io_md": false, 00:07:08.724 "write_zeroes": true, 00:07:08.724 "zcopy": true, 00:07:08.724 "get_zone_info": false, 00:07:08.724 "zone_management": false, 00:07:08.724 "zone_append": false, 00:07:08.724 "compare": false, 00:07:08.724 "compare_and_write": false, 00:07:08.724 "abort": true, 00:07:08.724 "seek_hole": false, 00:07:08.724 "seek_data": false, 00:07:08.724 "copy": true, 00:07:08.724 "nvme_iov_md": false 00:07:08.724 }, 00:07:08.724 "memory_domains": [ 00:07:08.724 { 00:07:08.725 "dma_device_id": "system", 00:07:08.725 "dma_device_type": 1 00:07:08.725 }, 00:07:08.725 { 00:07:08.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:08.725 "dma_device_type": 2 00:07:08.725 } 00:07:08.725 ], 00:07:08.725 "driver_specific": {} 00:07:08.725 } 00:07:08.725 ] 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:08.725 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:08.981 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:08.981 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:08.981 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:08.981 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:08.981 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:08.981 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:09.238 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:09.238 "name": "Existed_Raid", 00:07:09.238 "uuid": "4f82d8c5-42cf-11ef-96ac-773515fba644", 00:07:09.238 "strip_size_kb": 64, 00:07:09.238 "state": "online", 00:07:09.238 "raid_level": "raid0", 00:07:09.238 "superblock": true, 00:07:09.238 "num_base_bdevs": 2, 00:07:09.238 "num_base_bdevs_discovered": 2, 00:07:09.238 "num_base_bdevs_operational": 2, 
00:07:09.238 "base_bdevs_list": [ 00:07:09.238 { 00:07:09.238 "name": "BaseBdev1", 00:07:09.238 "uuid": "4e9877bd-42cf-11ef-96ac-773515fba644", 00:07:09.238 "is_configured": true, 00:07:09.238 "data_offset": 2048, 00:07:09.238 "data_size": 63488 00:07:09.238 }, 00:07:09.238 { 00:07:09.238 "name": "BaseBdev2", 00:07:09.238 "uuid": "50061315-42cf-11ef-96ac-773515fba644", 00:07:09.238 "is_configured": true, 00:07:09.238 "data_offset": 2048, 00:07:09.238 "data_size": 63488 00:07:09.238 } 00:07:09.238 ] 00:07:09.238 }' 00:07:09.238 17:26:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:09.238 17:26:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:09.496 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:09.496 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:09.496 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:09.496 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:09.496 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:09.496 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:09.496 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:09.496 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:09.754 [2024-07-15 17:26:05.433422] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:09.754 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:09.754 "name": "Existed_Raid", 00:07:09.754 "aliases": [ 00:07:09.754 "4f82d8c5-42cf-11ef-96ac-773515fba644" 00:07:09.754 ], 00:07:09.754 "product_name": "Raid Volume", 00:07:09.754 "block_size": 512, 00:07:09.754 "num_blocks": 126976, 00:07:09.754 "uuid": "4f82d8c5-42cf-11ef-96ac-773515fba644", 00:07:09.754 "assigned_rate_limits": { 00:07:09.754 "rw_ios_per_sec": 0, 00:07:09.754 "rw_mbytes_per_sec": 0, 00:07:09.754 "r_mbytes_per_sec": 0, 00:07:09.754 "w_mbytes_per_sec": 0 00:07:09.754 }, 00:07:09.754 "claimed": false, 00:07:09.754 "zoned": false, 00:07:09.754 "supported_io_types": { 00:07:09.754 "read": true, 00:07:09.754 "write": true, 00:07:09.754 "unmap": true, 00:07:09.754 "flush": true, 00:07:09.754 "reset": true, 00:07:09.754 "nvme_admin": false, 00:07:09.754 "nvme_io": false, 00:07:09.754 "nvme_io_md": false, 00:07:09.754 "write_zeroes": true, 00:07:09.754 "zcopy": false, 00:07:09.754 "get_zone_info": false, 00:07:09.754 "zone_management": false, 00:07:09.754 "zone_append": false, 00:07:09.754 "compare": false, 00:07:09.754 "compare_and_write": false, 00:07:09.754 "abort": false, 00:07:09.754 "seek_hole": false, 00:07:09.754 "seek_data": false, 00:07:09.754 "copy": false, 00:07:09.754 "nvme_iov_md": false 00:07:09.754 }, 00:07:09.754 "memory_domains": [ 00:07:09.754 { 00:07:09.754 "dma_device_id": "system", 00:07:09.754 "dma_device_type": 1 00:07:09.754 }, 00:07:09.754 { 00:07:09.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.754 "dma_device_type": 2 00:07:09.754 }, 00:07:09.754 { 00:07:09.754 "dma_device_id": "system", 00:07:09.754 "dma_device_type": 1 00:07:09.754 
}, 00:07:09.754 { 00:07:09.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.754 "dma_device_type": 2 00:07:09.754 } 00:07:09.754 ], 00:07:09.754 "driver_specific": { 00:07:09.754 "raid": { 00:07:09.754 "uuid": "4f82d8c5-42cf-11ef-96ac-773515fba644", 00:07:09.754 "strip_size_kb": 64, 00:07:09.754 "state": "online", 00:07:09.754 "raid_level": "raid0", 00:07:09.754 "superblock": true, 00:07:09.754 "num_base_bdevs": 2, 00:07:09.754 "num_base_bdevs_discovered": 2, 00:07:09.754 "num_base_bdevs_operational": 2, 00:07:09.754 "base_bdevs_list": [ 00:07:09.754 { 00:07:09.754 "name": "BaseBdev1", 00:07:09.754 "uuid": "4e9877bd-42cf-11ef-96ac-773515fba644", 00:07:09.754 "is_configured": true, 00:07:09.754 "data_offset": 2048, 00:07:09.754 "data_size": 63488 00:07:09.754 }, 00:07:09.754 { 00:07:09.754 "name": "BaseBdev2", 00:07:09.754 "uuid": "50061315-42cf-11ef-96ac-773515fba644", 00:07:09.754 "is_configured": true, 00:07:09.754 "data_offset": 2048, 00:07:09.754 "data_size": 63488 00:07:09.754 } 00:07:09.754 ] 00:07:09.754 } 00:07:09.754 } 00:07:09.754 }' 00:07:09.754 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:09.754 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:09.754 BaseBdev2' 00:07:09.754 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:09.754 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:09.754 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:10.012 "name": "BaseBdev1", 00:07:10.012 "aliases": [ 00:07:10.012 "4e9877bd-42cf-11ef-96ac-773515fba644" 00:07:10.012 ], 00:07:10.012 "product_name": "Malloc disk", 00:07:10.012 "block_size": 512, 00:07:10.012 "num_blocks": 65536, 00:07:10.012 "uuid": "4e9877bd-42cf-11ef-96ac-773515fba644", 00:07:10.012 "assigned_rate_limits": { 00:07:10.012 "rw_ios_per_sec": 0, 00:07:10.012 "rw_mbytes_per_sec": 0, 00:07:10.012 "r_mbytes_per_sec": 0, 00:07:10.012 "w_mbytes_per_sec": 0 00:07:10.012 }, 00:07:10.012 "claimed": true, 00:07:10.012 "claim_type": "exclusive_write", 00:07:10.012 "zoned": false, 00:07:10.012 "supported_io_types": { 00:07:10.012 "read": true, 00:07:10.012 "write": true, 00:07:10.012 "unmap": true, 00:07:10.012 "flush": true, 00:07:10.012 "reset": true, 00:07:10.012 "nvme_admin": false, 00:07:10.012 "nvme_io": false, 00:07:10.012 "nvme_io_md": false, 00:07:10.012 "write_zeroes": true, 00:07:10.012 "zcopy": true, 00:07:10.012 "get_zone_info": false, 00:07:10.012 "zone_management": false, 00:07:10.012 "zone_append": false, 00:07:10.012 "compare": false, 00:07:10.012 "compare_and_write": false, 00:07:10.012 "abort": true, 00:07:10.012 "seek_hole": false, 00:07:10.012 "seek_data": false, 00:07:10.012 "copy": true, 00:07:10.012 "nvme_iov_md": false 00:07:10.012 }, 00:07:10.012 "memory_domains": [ 00:07:10.012 { 00:07:10.012 "dma_device_id": "system", 00:07:10.012 "dma_device_type": 1 00:07:10.012 }, 00:07:10.012 { 00:07:10.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.012 "dma_device_type": 2 00:07:10.012 } 00:07:10.012 ], 00:07:10.012 "driver_specific": {} 00:07:10.012 }' 00:07:10.012 17:26:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:10.012 17:26:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:10.270 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:10.270 "name": "BaseBdev2", 00:07:10.270 "aliases": [ 00:07:10.270 "50061315-42cf-11ef-96ac-773515fba644" 00:07:10.270 ], 00:07:10.270 "product_name": "Malloc disk", 00:07:10.270 "block_size": 512, 00:07:10.270 "num_blocks": 65536, 00:07:10.270 "uuid": "50061315-42cf-11ef-96ac-773515fba644", 00:07:10.270 "assigned_rate_limits": { 00:07:10.270 "rw_ios_per_sec": 0, 00:07:10.270 "rw_mbytes_per_sec": 0, 00:07:10.270 "r_mbytes_per_sec": 0, 00:07:10.270 "w_mbytes_per_sec": 0 00:07:10.270 }, 00:07:10.270 "claimed": true, 00:07:10.270 "claim_type": "exclusive_write", 00:07:10.270 "zoned": false, 00:07:10.270 "supported_io_types": { 00:07:10.270 "read": true, 00:07:10.270 "write": true, 00:07:10.270 "unmap": true, 00:07:10.270 "flush": true, 00:07:10.270 "reset": true, 00:07:10.270 "nvme_admin": false, 00:07:10.270 "nvme_io": false, 00:07:10.270 "nvme_io_md": false, 00:07:10.270 "write_zeroes": true, 00:07:10.270 "zcopy": true, 00:07:10.270 "get_zone_info": false, 00:07:10.270 "zone_management": false, 00:07:10.270 "zone_append": false, 00:07:10.270 "compare": false, 00:07:10.270 "compare_and_write": false, 00:07:10.270 "abort": true, 00:07:10.270 "seek_hole": false, 00:07:10.270 "seek_data": false, 00:07:10.270 "copy": true, 00:07:10.270 "nvme_iov_md": false 00:07:10.270 }, 00:07:10.270 "memory_domains": [ 00:07:10.270 { 00:07:10.270 "dma_device_id": "system", 00:07:10.270 "dma_device_type": 1 00:07:10.270 }, 00:07:10.270 { 00:07:10.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:10.270 "dma_device_type": 2 00:07:10.271 } 00:07:10.271 ], 00:07:10.271 "driver_specific": {} 00:07:10.271 }' 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:10.271 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:10.528 [2024-07-15 17:26:06.297425] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:10.528 [2024-07-15 17:26:06.297459] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:10.528 [2024-07-15 17:26:06.297485] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:10.528 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:10.529 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:10.529 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:10.529 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:07:10.529 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:10.787 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:10.787 "name": "Existed_Raid", 00:07:10.787 "uuid": "4f82d8c5-42cf-11ef-96ac-773515fba644", 00:07:10.787 "strip_size_kb": 64, 00:07:10.787 "state": "offline", 00:07:10.787 "raid_level": "raid0", 00:07:10.787 "superblock": true, 00:07:10.787 "num_base_bdevs": 2, 00:07:10.788 "num_base_bdevs_discovered": 1, 00:07:10.788 "num_base_bdevs_operational": 1, 00:07:10.788 "base_bdevs_list": [ 00:07:10.788 { 00:07:10.788 "name": null, 00:07:10.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:10.788 "is_configured": false, 00:07:10.788 "data_offset": 2048, 00:07:10.788 "data_size": 63488 00:07:10.788 }, 00:07:10.788 { 00:07:10.788 "name": "BaseBdev2", 00:07:10.788 "uuid": "50061315-42cf-11ef-96ac-773515fba644", 00:07:10.788 "is_configured": true, 00:07:10.788 "data_offset": 2048, 00:07:10.788 "data_size": 63488 00:07:10.788 } 00:07:10.788 ] 00:07:10.788 }' 00:07:10.788 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:10.788 17:26:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:11.353 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:11.353 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:11.353 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.353 17:26:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:11.353 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:11.353 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:11.353 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:11.610 [2024-07-15 17:26:07.395654] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:11.610 [2024-07-15 17:26:07.395691] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x28dd8da34a00 name Existed_Raid, state offline 00:07:11.610 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:11.610 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:11.610 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:11.610 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 48907 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 48907 ']' 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 48907 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 48907 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:11.869 killing process with pid 48907 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48907' 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 48907 00:07:11.869 [2024-07-15 17:26:07.684706] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:11.869 [2024-07-15 17:26:07.684741] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:11.869 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 48907 00:07:12.127 17:26:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:12.127 00:07:12.127 real 0m8.986s 00:07:12.127 user 0m15.648s 00:07:12.127 sys 0m1.575s 00:07:12.127 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.127 ************************************ 00:07:12.127 END TEST raid_state_function_test_sb 00:07:12.127 ************************************ 00:07:12.127 17:26:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 17:26:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:12.127 17:26:07 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:12.127 17:26:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:12.127 17:26:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.127 17:26:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 ************************************ 00:07:12.127 START TEST raid_superblock_test 00:07:12.127 ************************************ 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 
-- # local base_bdevs_pt_uuid 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49177 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49177 /var/tmp/spdk-raid.sock 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 49177 ']' 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:12.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.127 17:26:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:12.127 [2024-07-15 17:26:07.922421] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:07:12.127 [2024-07-15 17:26:07.922667] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:12.694 EAL: TSC is not safe to use in SMP mode 00:07:12.694 EAL: TSC is not invariant 00:07:12.694 [2024-07-15 17:26:08.483516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.952 [2024-07-15 17:26:08.575100] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:12.952 [2024-07-15 17:26:08.577299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.952 [2024-07-15 17:26:08.578054] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.952 [2024-07-15 17:26:08.578068] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:13.211 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:13.212 17:26:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:13.470 malloc1 00:07:13.470 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:13.729 [2024-07-15 17:26:09.530723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:13.729 [2024-07-15 17:26:09.530796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:13.729 [2024-07-15 17:26:09.530825] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2dadcb834780 00:07:13.729 [2024-07-15 17:26:09.530834] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:13.729 [2024-07-15 17:26:09.531737] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:13.729 [2024-07-15 17:26:09.531762] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:13.729 pt1 00:07:13.729 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:13.729 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:13.729 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:13.729 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:13.729 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:13.729 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:13.729 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:13.729 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:13.729 17:26:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:13.987 malloc2 00:07:13.987 17:26:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:14.245 [2024-07-15 17:26:10.038776] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:14.245 [2024-07-15 17:26:10.038835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:14.245 [2024-07-15 17:26:10.038848] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2dadcb834c80 00:07:14.245 [2024-07-15 17:26:10.038857] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:14.245 [2024-07-15 17:26:10.039512] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:14.245 [2024-07-15 17:26:10.039538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:14.245 pt2 00:07:14.245 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:14.245 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:14.245 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:14.504 [2024-07-15 17:26:10.270803] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:14.504 [2024-07-15 17:26:10.271458] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:14.504 [2024-07-15 17:26:10.271515] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2dadcb834f00 00:07:14.504 [2024-07-15 17:26:10.271521] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:14.504 [2024-07-15 17:26:10.271552] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2dadcb897e20 00:07:14.504 [2024-07-15 17:26:10.271631] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2dadcb834f00 00:07:14.504 [2024-07-15 17:26:10.271636] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2dadcb834f00 00:07:14.504 [2024-07-15 17:26:10.271663] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:14.504 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:14.786 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:14.786 "name": "raid_bdev1", 00:07:14.786 "uuid": "53c2b6df-42cf-11ef-96ac-773515fba644", 00:07:14.786 "strip_size_kb": 64, 00:07:14.786 "state": "online", 00:07:14.786 "raid_level": "raid0", 00:07:14.786 "superblock": true, 00:07:14.786 "num_base_bdevs": 2, 00:07:14.786 "num_base_bdevs_discovered": 2, 00:07:14.786 "num_base_bdevs_operational": 2, 00:07:14.786 "base_bdevs_list": [ 00:07:14.786 { 00:07:14.786 "name": "pt1", 00:07:14.786 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:14.786 "is_configured": true, 00:07:14.786 "data_offset": 2048, 00:07:14.786 "data_size": 63488 00:07:14.786 }, 00:07:14.786 { 00:07:14.786 "name": "pt2", 00:07:14.786 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:14.786 "is_configured": true, 00:07:14.786 "data_offset": 2048, 00:07:14.786 "data_size": 63488 00:07:14.786 } 00:07:14.786 ] 00:07:14.786 }' 00:07:14.786 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:14.786 17:26:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:15.079 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:15.079 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:15.079 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:15.079 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:15.079 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:15.079 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:15.079 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:15.079 17:26:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:15.338 [2024-07-15 17:26:11.074831] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:15.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:15.338 "name": "raid_bdev1", 00:07:15.338 "aliases": [ 00:07:15.338 "53c2b6df-42cf-11ef-96ac-773515fba644" 00:07:15.338 ], 00:07:15.338 "product_name": "Raid Volume", 00:07:15.338 "block_size": 512, 00:07:15.338 "num_blocks": 126976, 00:07:15.338 "uuid": "53c2b6df-42cf-11ef-96ac-773515fba644", 00:07:15.338 "assigned_rate_limits": { 00:07:15.338 "rw_ios_per_sec": 0, 00:07:15.338 "rw_mbytes_per_sec": 0, 00:07:15.338 "r_mbytes_per_sec": 0, 00:07:15.338 "w_mbytes_per_sec": 0 00:07:15.338 }, 00:07:15.338 "claimed": false, 00:07:15.338 "zoned": false, 00:07:15.338 "supported_io_types": { 00:07:15.338 "read": true, 00:07:15.338 "write": true, 00:07:15.338 "unmap": true, 00:07:15.338 "flush": true, 00:07:15.338 "reset": true, 00:07:15.338 "nvme_admin": false, 00:07:15.338 "nvme_io": 
false, 00:07:15.338 "nvme_io_md": false, 00:07:15.338 "write_zeroes": true, 00:07:15.338 "zcopy": false, 00:07:15.338 "get_zone_info": false, 00:07:15.338 "zone_management": false, 00:07:15.338 "zone_append": false, 00:07:15.338 "compare": false, 00:07:15.338 "compare_and_write": false, 00:07:15.338 "abort": false, 00:07:15.338 "seek_hole": false, 00:07:15.338 "seek_data": false, 00:07:15.338 "copy": false, 00:07:15.338 "nvme_iov_md": false 00:07:15.338 }, 00:07:15.338 "memory_domains": [ 00:07:15.338 { 00:07:15.338 "dma_device_id": "system", 00:07:15.338 "dma_device_type": 1 00:07:15.338 }, 00:07:15.338 { 00:07:15.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.338 "dma_device_type": 2 00:07:15.338 }, 00:07:15.338 { 00:07:15.338 "dma_device_id": "system", 00:07:15.338 "dma_device_type": 1 00:07:15.338 }, 00:07:15.338 { 00:07:15.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.338 "dma_device_type": 2 00:07:15.338 } 00:07:15.338 ], 00:07:15.338 "driver_specific": { 00:07:15.338 "raid": { 00:07:15.338 "uuid": "53c2b6df-42cf-11ef-96ac-773515fba644", 00:07:15.338 "strip_size_kb": 64, 00:07:15.338 "state": "online", 00:07:15.338 "raid_level": "raid0", 00:07:15.338 "superblock": true, 00:07:15.338 "num_base_bdevs": 2, 00:07:15.338 "num_base_bdevs_discovered": 2, 00:07:15.338 "num_base_bdevs_operational": 2, 00:07:15.338 "base_bdevs_list": [ 00:07:15.338 { 00:07:15.338 "name": "pt1", 00:07:15.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.338 "is_configured": true, 00:07:15.338 "data_offset": 2048, 00:07:15.338 "data_size": 63488 00:07:15.338 }, 00:07:15.338 { 00:07:15.338 "name": "pt2", 00:07:15.338 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:15.338 "is_configured": true, 00:07:15.338 "data_offset": 2048, 00:07:15.338 "data_size": 63488 00:07:15.338 } 00:07:15.338 ] 00:07:15.338 } 00:07:15.338 } 00:07:15.338 }' 00:07:15.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:15.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:15.338 pt2' 00:07:15.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:15.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:15.338 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:15.596 "name": "pt1", 00:07:15.596 "aliases": [ 00:07:15.596 "00000000-0000-0000-0000-000000000001" 00:07:15.596 ], 00:07:15.596 "product_name": "passthru", 00:07:15.596 "block_size": 512, 00:07:15.596 "num_blocks": 65536, 00:07:15.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:15.596 "assigned_rate_limits": { 00:07:15.596 "rw_ios_per_sec": 0, 00:07:15.596 "rw_mbytes_per_sec": 0, 00:07:15.596 "r_mbytes_per_sec": 0, 00:07:15.596 "w_mbytes_per_sec": 0 00:07:15.596 }, 00:07:15.596 "claimed": true, 00:07:15.596 "claim_type": "exclusive_write", 00:07:15.596 "zoned": false, 00:07:15.596 "supported_io_types": { 00:07:15.596 "read": true, 00:07:15.596 "write": true, 00:07:15.596 "unmap": true, 00:07:15.596 "flush": true, 00:07:15.596 "reset": true, 00:07:15.596 "nvme_admin": false, 00:07:15.596 "nvme_io": false, 00:07:15.596 "nvme_io_md": false, 00:07:15.596 "write_zeroes": true, 
00:07:15.596 "zcopy": true, 00:07:15.596 "get_zone_info": false, 00:07:15.596 "zone_management": false, 00:07:15.596 "zone_append": false, 00:07:15.596 "compare": false, 00:07:15.596 "compare_and_write": false, 00:07:15.596 "abort": true, 00:07:15.596 "seek_hole": false, 00:07:15.596 "seek_data": false, 00:07:15.596 "copy": true, 00:07:15.596 "nvme_iov_md": false 00:07:15.596 }, 00:07:15.596 "memory_domains": [ 00:07:15.596 { 00:07:15.596 "dma_device_id": "system", 00:07:15.596 "dma_device_type": 1 00:07:15.596 }, 00:07:15.596 { 00:07:15.596 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:15.596 "dma_device_type": 2 00:07:15.596 } 00:07:15.596 ], 00:07:15.596 "driver_specific": { 00:07:15.596 "passthru": { 00:07:15.596 "name": "pt1", 00:07:15.596 "base_bdev_name": "malloc1" 00:07:15.596 } 00:07:15.596 } 00:07:15.596 }' 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:15.596 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:15.853 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:15.853 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:15.853 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:15.853 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:15.853 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:15.853 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:15.853 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:16.110 "name": "pt2", 00:07:16.110 "aliases": [ 00:07:16.110 "00000000-0000-0000-0000-000000000002" 00:07:16.110 ], 00:07:16.110 "product_name": "passthru", 00:07:16.110 "block_size": 512, 00:07:16.110 "num_blocks": 65536, 00:07:16.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:16.110 "assigned_rate_limits": { 00:07:16.110 "rw_ios_per_sec": 0, 00:07:16.110 "rw_mbytes_per_sec": 0, 00:07:16.110 "r_mbytes_per_sec": 0, 00:07:16.110 "w_mbytes_per_sec": 0 00:07:16.110 }, 00:07:16.110 "claimed": true, 00:07:16.110 "claim_type": "exclusive_write", 00:07:16.110 "zoned": false, 00:07:16.110 "supported_io_types": { 00:07:16.110 "read": true, 00:07:16.110 "write": true, 00:07:16.110 "unmap": true, 00:07:16.110 "flush": true, 00:07:16.110 "reset": true, 00:07:16.110 "nvme_admin": false, 00:07:16.110 "nvme_io": false, 00:07:16.110 "nvme_io_md": false, 00:07:16.110 "write_zeroes": true, 00:07:16.110 "zcopy": true, 00:07:16.110 "get_zone_info": false, 00:07:16.110 "zone_management": false, 00:07:16.110 "zone_append": false, 00:07:16.110 
"compare": false, 00:07:16.110 "compare_and_write": false, 00:07:16.110 "abort": true, 00:07:16.110 "seek_hole": false, 00:07:16.110 "seek_data": false, 00:07:16.110 "copy": true, 00:07:16.110 "nvme_iov_md": false 00:07:16.110 }, 00:07:16.110 "memory_domains": [ 00:07:16.110 { 00:07:16.110 "dma_device_id": "system", 00:07:16.110 "dma_device_type": 1 00:07:16.110 }, 00:07:16.110 { 00:07:16.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:16.110 "dma_device_type": 2 00:07:16.110 } 00:07:16.110 ], 00:07:16.110 "driver_specific": { 00:07:16.110 "passthru": { 00:07:16.110 "name": "pt2", 00:07:16.110 "base_bdev_name": "malloc2" 00:07:16.110 } 00:07:16.110 } 00:07:16.110 }' 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:16.110 17:26:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:16.367 [2024-07-15 17:26:11.990859] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:16.367 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=53c2b6df-42cf-11ef-96ac-773515fba644 00:07:16.367 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 53c2b6df-42cf-11ef-96ac-773515fba644 ']' 00:07:16.367 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:16.623 [2024-07-15 17:26:12.274819] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:16.623 [2024-07-15 17:26:12.274850] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:16.623 [2024-07-15 17:26:12.274874] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:16.623 [2024-07-15 17:26:12.274886] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:16.623 [2024-07-15 17:26:12.274890] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2dadcb834f00 name raid_bdev1, state offline 00:07:16.623 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:16.623 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:07:16.879 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:07:16.879 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:07:16.879 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:16.879 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:17.136 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:17.136 17:26:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:17.393 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:17.393 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:17.651 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:17.907 [2024-07-15 17:26:13.646855] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:17.907 [2024-07-15 17:26:13.647432] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:17.907 [2024-07-15 17:26:13.647457] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:17.907 [2024-07-15 17:26:13.647493] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:17.907 [2024-07-15 17:26:13.647504] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:17.907 [2024-07-15 17:26:13.647509] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2dadcb834c80 name raid_bdev1, state configuring 00:07:17.907 request: 00:07:17.907 { 00:07:17.907 "name": "raid_bdev1", 00:07:17.907 "raid_level": "raid0", 00:07:17.907 "base_bdevs": [ 00:07:17.907 "malloc1", 00:07:17.907 "malloc2" 00:07:17.907 ], 00:07:17.907 "strip_size_kb": 64, 00:07:17.907 "superblock": false, 00:07:17.907 "method": "bdev_raid_create", 00:07:17.907 "req_id": 1 00:07:17.907 } 00:07:17.907 Got JSON-RPC error response 00:07:17.907 response: 00:07:17.907 { 00:07:17.907 "code": -17, 00:07:17.907 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:17.907 } 00:07:17.907 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:07:17.907 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.907 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.907 17:26:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.907 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.907 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:07:18.165 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:07:18.165 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:07:18.165 17:26:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:18.423 [2024-07-15 17:26:14.238851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:18.423 [2024-07-15 17:26:14.238905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:18.423 [2024-07-15 17:26:14.238918] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2dadcb834780 00:07:18.423 [2024-07-15 17:26:14.238927] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:18.423 [2024-07-15 17:26:14.239568] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:18.423 [2024-07-15 17:26:14.239594] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:18.423 [2024-07-15 17:26:14.239618] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:18.423 [2024-07-15 17:26:14.239630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:18.423 pt1 00:07:18.423 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:18.423 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:18.423 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:18.423 17:26:14 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:18.424 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:18.424 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:18.424 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:18.681 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:18.681 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:18.681 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:18.682 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:18.682 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:18.940 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:18.940 "name": "raid_bdev1", 00:07:18.940 "uuid": "53c2b6df-42cf-11ef-96ac-773515fba644", 00:07:18.940 "strip_size_kb": 64, 00:07:18.940 "state": "configuring", 00:07:18.940 "raid_level": "raid0", 00:07:18.940 "superblock": true, 00:07:18.940 "num_base_bdevs": 2, 00:07:18.940 "num_base_bdevs_discovered": 1, 00:07:18.940 "num_base_bdevs_operational": 2, 00:07:18.940 "base_bdevs_list": [ 00:07:18.940 { 00:07:18.940 "name": "pt1", 00:07:18.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:18.940 "is_configured": true, 00:07:18.940 "data_offset": 2048, 00:07:18.940 "data_size": 63488 00:07:18.940 }, 00:07:18.940 { 00:07:18.940 "name": null, 00:07:18.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:18.940 "is_configured": false, 00:07:18.940 "data_offset": 2048, 00:07:18.940 "data_size": 63488 00:07:18.940 } 00:07:18.940 ] 00:07:18.940 }' 00:07:18.940 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:18.940 17:26:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.198 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:19.198 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:19.198 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:19.198 17:26:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:19.456 [2024-07-15 17:26:15.090870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:19.456 [2024-07-15 17:26:15.090926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:19.456 [2024-07-15 17:26:15.090939] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2dadcb834f00 00:07:19.456 [2024-07-15 17:26:15.090947] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:19.456 [2024-07-15 17:26:15.091063] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:19.456 [2024-07-15 17:26:15.091074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:19.456 [2024-07-15 17:26:15.091098] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:19.456 
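Note on the step traced above: raid_bdev1 is being re-assembled purely from the on-disk superblocks, so recreating the passthru bdevs is enough for the examine path to re-claim them; no explicit bdev_raid_create is issued. A minimal sketch of that sequence against the same RPC socket and names used in this run (it assumes the malloc1/malloc2 base bdevs still exist and carry the superblock written earlier in the test):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# Recreating pt1/pt2 lets the examine path find the raid superblock on each
# base bdev and re-claim it for raid_bdev1.
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# The array should come back "online" with both base bdevs discovered.
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'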
[2024-07-15 17:26:15.091106] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:19.456 [2024-07-15 17:26:15.091132] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2dadcb835180 00:07:19.456 [2024-07-15 17:26:15.091136] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:19.456 [2024-07-15 17:26:15.091157] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2dadcb897e20 00:07:19.456 [2024-07-15 17:26:15.091211] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2dadcb835180 00:07:19.456 [2024-07-15 17:26:15.091216] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2dadcb835180 00:07:19.456 [2024-07-15 17:26:15.091237] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.456 pt2 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:19.456 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:19.714 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:19.714 "name": "raid_bdev1", 00:07:19.714 "uuid": "53c2b6df-42cf-11ef-96ac-773515fba644", 00:07:19.714 "strip_size_kb": 64, 00:07:19.714 "state": "online", 00:07:19.714 "raid_level": "raid0", 00:07:19.714 "superblock": true, 00:07:19.714 "num_base_bdevs": 2, 00:07:19.714 "num_base_bdevs_discovered": 2, 00:07:19.714 "num_base_bdevs_operational": 2, 00:07:19.714 "base_bdevs_list": [ 00:07:19.714 { 00:07:19.714 "name": "pt1", 00:07:19.714 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:19.714 "is_configured": true, 00:07:19.714 "data_offset": 2048, 00:07:19.714 "data_size": 63488 00:07:19.714 }, 00:07:19.714 { 00:07:19.714 "name": "pt2", 00:07:19.714 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:19.714 "is_configured": true, 00:07:19.714 "data_offset": 2048, 00:07:19.714 "data_size": 63488 00:07:19.714 } 00:07:19.714 ] 00:07:19.714 }' 00:07:19.714 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:07:19.714 17:26:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.972 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:19.972 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:19.972 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:19.972 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:19.972 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:19.972 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:19.972 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:19.972 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:20.230 [2024-07-15 17:26:15.910917] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.230 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:20.230 "name": "raid_bdev1", 00:07:20.230 "aliases": [ 00:07:20.230 "53c2b6df-42cf-11ef-96ac-773515fba644" 00:07:20.230 ], 00:07:20.230 "product_name": "Raid Volume", 00:07:20.230 "block_size": 512, 00:07:20.230 "num_blocks": 126976, 00:07:20.230 "uuid": "53c2b6df-42cf-11ef-96ac-773515fba644", 00:07:20.230 "assigned_rate_limits": { 00:07:20.230 "rw_ios_per_sec": 0, 00:07:20.230 "rw_mbytes_per_sec": 0, 00:07:20.230 "r_mbytes_per_sec": 0, 00:07:20.230 "w_mbytes_per_sec": 0 00:07:20.230 }, 00:07:20.230 "claimed": false, 00:07:20.230 "zoned": false, 00:07:20.230 "supported_io_types": { 00:07:20.230 "read": true, 00:07:20.230 "write": true, 00:07:20.230 "unmap": true, 00:07:20.230 "flush": true, 00:07:20.230 "reset": true, 00:07:20.230 "nvme_admin": false, 00:07:20.230 "nvme_io": false, 00:07:20.230 "nvme_io_md": false, 00:07:20.230 "write_zeroes": true, 00:07:20.230 "zcopy": false, 00:07:20.230 "get_zone_info": false, 00:07:20.230 "zone_management": false, 00:07:20.230 "zone_append": false, 00:07:20.230 "compare": false, 00:07:20.230 "compare_and_write": false, 00:07:20.230 "abort": false, 00:07:20.230 "seek_hole": false, 00:07:20.230 "seek_data": false, 00:07:20.230 "copy": false, 00:07:20.230 "nvme_iov_md": false 00:07:20.230 }, 00:07:20.231 "memory_domains": [ 00:07:20.231 { 00:07:20.231 "dma_device_id": "system", 00:07:20.231 "dma_device_type": 1 00:07:20.231 }, 00:07:20.231 { 00:07:20.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.231 "dma_device_type": 2 00:07:20.231 }, 00:07:20.231 { 00:07:20.231 "dma_device_id": "system", 00:07:20.231 "dma_device_type": 1 00:07:20.231 }, 00:07:20.231 { 00:07:20.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.231 "dma_device_type": 2 00:07:20.231 } 00:07:20.231 ], 00:07:20.231 "driver_specific": { 00:07:20.231 "raid": { 00:07:20.231 "uuid": "53c2b6df-42cf-11ef-96ac-773515fba644", 00:07:20.231 "strip_size_kb": 64, 00:07:20.231 "state": "online", 00:07:20.231 "raid_level": "raid0", 00:07:20.231 "superblock": true, 00:07:20.231 "num_base_bdevs": 2, 00:07:20.231 "num_base_bdevs_discovered": 2, 00:07:20.231 "num_base_bdevs_operational": 2, 00:07:20.231 "base_bdevs_list": [ 00:07:20.231 { 00:07:20.231 "name": "pt1", 00:07:20.231 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.231 "is_configured": 
true, 00:07:20.231 "data_offset": 2048, 00:07:20.231 "data_size": 63488 00:07:20.231 }, 00:07:20.231 { 00:07:20.231 "name": "pt2", 00:07:20.231 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.231 "is_configured": true, 00:07:20.231 "data_offset": 2048, 00:07:20.231 "data_size": 63488 00:07:20.231 } 00:07:20.231 ] 00:07:20.231 } 00:07:20.231 } 00:07:20.231 }' 00:07:20.231 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.231 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:20.231 pt2' 00:07:20.231 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:20.231 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:20.231 17:26:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:20.489 "name": "pt1", 00:07:20.489 "aliases": [ 00:07:20.489 "00000000-0000-0000-0000-000000000001" 00:07:20.489 ], 00:07:20.489 "product_name": "passthru", 00:07:20.489 "block_size": 512, 00:07:20.489 "num_blocks": 65536, 00:07:20.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:20.489 "assigned_rate_limits": { 00:07:20.489 "rw_ios_per_sec": 0, 00:07:20.489 "rw_mbytes_per_sec": 0, 00:07:20.489 "r_mbytes_per_sec": 0, 00:07:20.489 "w_mbytes_per_sec": 0 00:07:20.489 }, 00:07:20.489 "claimed": true, 00:07:20.489 "claim_type": "exclusive_write", 00:07:20.489 "zoned": false, 00:07:20.489 "supported_io_types": { 00:07:20.489 "read": true, 00:07:20.489 "write": true, 00:07:20.489 "unmap": true, 00:07:20.489 "flush": true, 00:07:20.489 "reset": true, 00:07:20.489 "nvme_admin": false, 00:07:20.489 "nvme_io": false, 00:07:20.489 "nvme_io_md": false, 00:07:20.489 "write_zeroes": true, 00:07:20.489 "zcopy": true, 00:07:20.489 "get_zone_info": false, 00:07:20.489 "zone_management": false, 00:07:20.489 "zone_append": false, 00:07:20.489 "compare": false, 00:07:20.489 "compare_and_write": false, 00:07:20.489 "abort": true, 00:07:20.489 "seek_hole": false, 00:07:20.489 "seek_data": false, 00:07:20.489 "copy": true, 00:07:20.489 "nvme_iov_md": false 00:07:20.489 }, 00:07:20.489 "memory_domains": [ 00:07:20.489 { 00:07:20.489 "dma_device_id": "system", 00:07:20.489 "dma_device_type": 1 00:07:20.489 }, 00:07:20.489 { 00:07:20.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.489 "dma_device_type": 2 00:07:20.489 } 00:07:20.489 ], 00:07:20.489 "driver_specific": { 00:07:20.489 "passthru": { 00:07:20.489 "name": "pt1", 00:07:20.489 "base_bdev_name": "malloc1" 00:07:20.489 } 00:07:20.489 } 00:07:20.489 }' 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:20.489 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:20.747 "name": "pt2", 00:07:20.747 "aliases": [ 00:07:20.747 "00000000-0000-0000-0000-000000000002" 00:07:20.747 ], 00:07:20.747 "product_name": "passthru", 00:07:20.747 "block_size": 512, 00:07:20.747 "num_blocks": 65536, 00:07:20.747 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:20.747 "assigned_rate_limits": { 00:07:20.747 "rw_ios_per_sec": 0, 00:07:20.747 "rw_mbytes_per_sec": 0, 00:07:20.747 "r_mbytes_per_sec": 0, 00:07:20.747 "w_mbytes_per_sec": 0 00:07:20.747 }, 00:07:20.747 "claimed": true, 00:07:20.747 "claim_type": "exclusive_write", 00:07:20.747 "zoned": false, 00:07:20.747 "supported_io_types": { 00:07:20.747 "read": true, 00:07:20.747 "write": true, 00:07:20.747 "unmap": true, 00:07:20.747 "flush": true, 00:07:20.747 "reset": true, 00:07:20.747 "nvme_admin": false, 00:07:20.747 "nvme_io": false, 00:07:20.747 "nvme_io_md": false, 00:07:20.747 "write_zeroes": true, 00:07:20.747 "zcopy": true, 00:07:20.747 "get_zone_info": false, 00:07:20.747 "zone_management": false, 00:07:20.747 "zone_append": false, 00:07:20.747 "compare": false, 00:07:20.747 "compare_and_write": false, 00:07:20.747 "abort": true, 00:07:20.747 "seek_hole": false, 00:07:20.747 "seek_data": false, 00:07:20.747 "copy": true, 00:07:20.747 "nvme_iov_md": false 00:07:20.747 }, 00:07:20.747 "memory_domains": [ 00:07:20.747 { 00:07:20.747 "dma_device_id": "system", 00:07:20.747 "dma_device_type": 1 00:07:20.747 }, 00:07:20.747 { 00:07:20.747 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.747 "dma_device_type": 2 00:07:20.747 } 00:07:20.747 ], 00:07:20.747 "driver_specific": { 00:07:20.747 "passthru": { 00:07:20.747 "name": "pt2", 00:07:20.747 "base_bdev_name": "malloc2" 00:07:20.747 } 00:07:20.747 } 00:07:20.747 }' 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:20.747 17:26:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:20.747 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:21.005 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:21.005 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:21.005 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:21.262 [2024-07-15 17:26:16.882930] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 53c2b6df-42cf-11ef-96ac-773515fba644 '!=' 53c2b6df-42cf-11ef-96ac-773515fba644 ']' 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49177 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 49177 ']' 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 49177 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 49177 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:21.262 killing process with pid 49177 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49177' 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 49177 00:07:21.262 [2024-07-15 17:26:16.912000] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:21.262 [2024-07-15 17:26:16.912025] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.262 [2024-07-15 17:26:16.912037] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:21.262 [2024-07-15 17:26:16.912042] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2dadcb835180 name raid_bdev1, state offline 00:07:21.262 17:26:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 49177 00:07:21.262 [2024-07-15 17:26:16.923658] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:21.519 17:26:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:21.519 00:07:21.519 real 0m9.185s 00:07:21.519 user 0m16.035s 00:07:21.519 sys 0m1.567s 00:07:21.520 17:26:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.520 17:26:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.520 ************************************ 00:07:21.520 END TEST raid_superblock_test 00:07:21.520 ************************************ 00:07:21.520 17:26:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:21.520 17:26:17 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:21.520 17:26:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:21.520 17:26:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.520 17:26:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:21.520 ************************************ 00:07:21.520 START TEST raid_read_error_test 00:07:21.520 ************************************ 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.2BA6TKAiVc 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49446 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49446 
/var/tmp/spdk-raid.sock 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 49446 ']' 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:21.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.520 17:26:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:21.520 [2024-07-15 17:26:17.158096] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:07:21.520 [2024-07-15 17:26:17.158360] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:22.084 EAL: TSC is not safe to use in SMP mode 00:07:22.084 EAL: TSC is not invariant 00:07:22.084 [2024-07-15 17:26:17.699091] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.084 [2024-07-15 17:26:17.785299] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:22.084 [2024-07-15 17:26:17.787414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.084 [2024-07-15 17:26:17.788158] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.084 [2024-07-15 17:26:17.788170] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:22.650 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.650 17:26:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:22.650 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:22.650 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:22.908 BaseBdev1_malloc 00:07:22.908 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:23.165 true 00:07:23.165 17:26:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:23.423 [2024-07-15 17:26:19.080047] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:23.423 [2024-07-15 17:26:19.080112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:23.423 [2024-07-15 17:26:19.080139] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x32e0e1434780 00:07:23.423 [2024-07-15 17:26:19.080148] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
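For reference, each base bdev in this error test is a three-layer stack: a malloc bdev, an error-injection bdev wrapped around it, and a passthru bdev that the raid module actually consumes. A minimal sketch of that stack for BaseBdev1, reusing the exact RPC calls traced here (BaseBdev2 is built the same way with the *2 names; the 32/512 size arguments are simply the values from this trace):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
# 32 MB malloc bdev with 512-byte blocks.
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc
# Error-injection wrapper; it exposes the bdev as EE_BaseBdev1_malloc.
$RPC bdev_error_create BaseBdev1_malloc
# Passthru on top of the error bdev; this is what raid_bdev1 is assembled from.
$RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1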
00:07:23.423 [2024-07-15 17:26:19.080823] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:23.423 [2024-07-15 17:26:19.080845] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:23.423 BaseBdev1 00:07:23.423 17:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:23.423 17:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:23.680 BaseBdev2_malloc 00:07:23.680 17:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:23.937 true 00:07:23.937 17:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:24.195 [2024-07-15 17:26:19.820049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:24.195 [2024-07-15 17:26:19.820095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:24.195 [2024-07-15 17:26:19.820123] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x32e0e1434c80 00:07:24.195 [2024-07-15 17:26:19.820132] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:24.195 [2024-07-15 17:26:19.820810] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:24.195 [2024-07-15 17:26:19.820835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:24.195 BaseBdev2 00:07:24.195 17:26:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:24.455 [2024-07-15 17:26:20.060070] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:24.455 [2024-07-15 17:26:20.060688] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:24.455 [2024-07-15 17:26:20.060784] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x32e0e1434f00 00:07:24.455 [2024-07-15 17:26:20.060791] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:24.455 [2024-07-15 17:26:20.060825] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x32e0e14a0e20 00:07:24.455 [2024-07-15 17:26:20.060901] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x32e0e1434f00 00:07:24.455 [2024-07-15 17:26:20.060905] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x32e0e1434f00 00:07:24.455 [2024-07-15 17:26:20.060933] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:24.455 17:26:20 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:24.455 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:24.713 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:24.713 "name": "raid_bdev1", 00:07:24.713 "uuid": "59987024-42cf-11ef-96ac-773515fba644", 00:07:24.713 "strip_size_kb": 64, 00:07:24.713 "state": "online", 00:07:24.713 "raid_level": "raid0", 00:07:24.713 "superblock": true, 00:07:24.713 "num_base_bdevs": 2, 00:07:24.713 "num_base_bdevs_discovered": 2, 00:07:24.713 "num_base_bdevs_operational": 2, 00:07:24.713 "base_bdevs_list": [ 00:07:24.713 { 00:07:24.713 "name": "BaseBdev1", 00:07:24.713 "uuid": "bbe7ca5c-eafe-a656-b756-2d37b97d16d8", 00:07:24.713 "is_configured": true, 00:07:24.713 "data_offset": 2048, 00:07:24.713 "data_size": 63488 00:07:24.713 }, 00:07:24.713 { 00:07:24.713 "name": "BaseBdev2", 00:07:24.713 "uuid": "a24fefb2-a83e-7e56-a758-25be3e9c2761", 00:07:24.713 "is_configured": true, 00:07:24.713 "data_offset": 2048, 00:07:24.713 "data_size": 63488 00:07:24.713 } 00:07:24.713 ] 00:07:24.714 }' 00:07:24.714 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:24.714 17:26:20 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:24.973 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:24.973 17:26:20 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:24.973 [2024-07-15 17:26:20.756257] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x32e0e14a0ec0 00:07:25.911 17:26:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:26.478 17:26:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:26.478 "name": "raid_bdev1", 00:07:26.478 "uuid": "59987024-42cf-11ef-96ac-773515fba644", 00:07:26.478 "strip_size_kb": 64, 00:07:26.478 "state": "online", 00:07:26.478 "raid_level": "raid0", 00:07:26.478 "superblock": true, 00:07:26.478 "num_base_bdevs": 2, 00:07:26.478 "num_base_bdevs_discovered": 2, 00:07:26.478 "num_base_bdevs_operational": 2, 00:07:26.478 "base_bdevs_list": [ 00:07:26.478 { 00:07:26.478 "name": "BaseBdev1", 00:07:26.478 "uuid": "bbe7ca5c-eafe-a656-b756-2d37b97d16d8", 00:07:26.478 "is_configured": true, 00:07:26.478 "data_offset": 2048, 00:07:26.478 "data_size": 63488 00:07:26.478 }, 00:07:26.478 { 00:07:26.478 "name": "BaseBdev2", 00:07:26.478 "uuid": "a24fefb2-a83e-7e56-a758-25be3e9c2761", 00:07:26.478 "is_configured": true, 00:07:26.478 "data_offset": 2048, 00:07:26.478 "data_size": 63488 00:07:26.478 } 00:07:26.478 ] 00:07:26.478 }' 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:26.478 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.045 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:27.304 [2024-07-15 17:26:22.898197] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:27.304 [2024-07-15 17:26:22.898225] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:27.304 [2024-07-15 17:26:22.898595] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:27.304 [2024-07-15 17:26:22.898620] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.304 [2024-07-15 17:26:22.898627] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:27.304 [2024-07-15 17:26:22.898631] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32e0e1434f00 name raid_bdev1, state offline 00:07:27.304 0 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49446 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 49446 ']' 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 49446 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49446 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:27.304 killing process with pid 49446 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49446' 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 49446 00:07:27.304 [2024-07-15 17:26:22.925636] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:27.304 17:26:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 49446 00:07:27.304 [2024-07-15 17:26:22.937262] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.2BA6TKAiVc 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:07:27.304 00:07:27.304 real 0m5.987s 00:07:27.304 user 0m9.207s 00:07:27.304 sys 0m1.035s 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.304 17:26:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.304 ************************************ 00:07:27.304 END TEST raid_read_error_test 00:07:27.304 ************************************ 00:07:27.570 17:26:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:27.570 17:26:23 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:27.570 17:26:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:27.570 17:26:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.570 17:26:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:27.570 ************************************ 00:07:27.570 START TEST raid_write_error_test 00:07:27.570 ************************************ 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.rXS45P1sZg 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49574 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49574 /var/tmp/spdk-raid.sock 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 49574 ']' 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:27.570 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.571 17:26:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:27.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:27.571 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:27.571 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.571 17:26:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:27.571 [2024-07-15 17:26:23.195045] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
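As in the read-error test above, bdevperf is started idle against the raid RPC socket and the failure is only armed once the array is assembled. Roughly, as a sketch built from the commands visible in this log (the write-side injection call mirrors the "read failure" call traced earlier and is an assumption here, not shown verbatim in this excerpt):

# bdevperf runs idle (-z) until perform_tests is sent over the RPC socket.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
  -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
# After raid_bdev1 is online, arm a failure on the first base bdev and run I/O.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
  bdev_error_inject_error EE_BaseBdev1_malloc write failure
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests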
00:07:27.571 [2024-07-15 17:26:23.195329] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:28.140 EAL: TSC is not safe to use in SMP mode 00:07:28.140 EAL: TSC is not invariant 00:07:28.140 [2024-07-15 17:26:23.756735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.140 [2024-07-15 17:26:23.841798] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:28.140 [2024-07-15 17:26:23.843971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.140 [2024-07-15 17:26:23.844757] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.140 [2024-07-15 17:26:23.844768] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:28.707 17:26:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:28.707 17:26:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:28.707 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:28.707 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:28.707 BaseBdev1_malloc 00:07:28.966 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:29.224 true 00:07:29.224 17:26:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:29.483 [2024-07-15 17:26:25.061039] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:29.483 [2024-07-15 17:26:25.061107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:29.483 [2024-07-15 17:26:25.061136] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26ac9c234780 00:07:29.483 [2024-07-15 17:26:25.061145] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:29.483 [2024-07-15 17:26:25.061808] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:29.483 [2024-07-15 17:26:25.061835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:29.483 BaseBdev1 00:07:29.483 17:26:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:29.483 17:26:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:29.741 BaseBdev2_malloc 00:07:29.741 17:26:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:29.999 true 00:07:29.999 17:26:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:30.258 [2024-07-15 17:26:25.893119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:30.258 [2024-07-15 17:26:25.893177] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:30.258 [2024-07-15 17:26:25.893202] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26ac9c234c80 00:07:30.258 [2024-07-15 17:26:25.893212] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:30.258 [2024-07-15 17:26:25.893914] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:30.258 [2024-07-15 17:26:25.893941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:30.258 BaseBdev2 00:07:30.258 17:26:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:30.516 [2024-07-15 17:26:26.189155] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:30.516 [2024-07-15 17:26:26.189790] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:30.516 [2024-07-15 17:26:26.189855] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x26ac9c234f00 00:07:30.516 [2024-07-15 17:26:26.189862] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:30.516 [2024-07-15 17:26:26.189895] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26ac9c2a0e20 00:07:30.516 [2024-07-15 17:26:26.189969] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26ac9c234f00 00:07:30.516 [2024-07-15 17:26:26.189974] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26ac9c234f00 00:07:30.516 [2024-07-15 17:26:26.190002] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.516 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:30.774 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:30.774 "name": "raid_bdev1", 00:07:30.774 "uuid": "5d3fa988-42cf-11ef-96ac-773515fba644", 00:07:30.774 "strip_size_kb": 64, 00:07:30.774 "state": "online", 00:07:30.774 
"raid_level": "raid0", 00:07:30.774 "superblock": true, 00:07:30.774 "num_base_bdevs": 2, 00:07:30.774 "num_base_bdevs_discovered": 2, 00:07:30.774 "num_base_bdevs_operational": 2, 00:07:30.774 "base_bdevs_list": [ 00:07:30.774 { 00:07:30.774 "name": "BaseBdev1", 00:07:30.774 "uuid": "c7e1a30e-bcf0-9f5e-bb66-03e5649ae210", 00:07:30.774 "is_configured": true, 00:07:30.774 "data_offset": 2048, 00:07:30.774 "data_size": 63488 00:07:30.774 }, 00:07:30.774 { 00:07:30.774 "name": "BaseBdev2", 00:07:30.774 "uuid": "fefdc8fa-27b3-fa55-aa67-86b6e67c687f", 00:07:30.774 "is_configured": true, 00:07:30.774 "data_offset": 2048, 00:07:30.774 "data_size": 63488 00:07:30.774 } 00:07:30.774 ] 00:07:30.774 }' 00:07:30.774 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:30.774 17:26:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:31.339 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:31.339 17:26:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:31.339 [2024-07-15 17:26:26.989400] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x26ac9c2a0ec0 00:07:32.273 17:26:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:32.530 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:32.788 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:32.788 "name": "raid_bdev1", 00:07:32.788 "uuid": "5d3fa988-42cf-11ef-96ac-773515fba644", 00:07:32.788 "strip_size_kb": 64, 00:07:32.788 "state": "online", 00:07:32.788 
"raid_level": "raid0", 00:07:32.788 "superblock": true, 00:07:32.788 "num_base_bdevs": 2, 00:07:32.788 "num_base_bdevs_discovered": 2, 00:07:32.788 "num_base_bdevs_operational": 2, 00:07:32.788 "base_bdevs_list": [ 00:07:32.788 { 00:07:32.788 "name": "BaseBdev1", 00:07:32.788 "uuid": "c7e1a30e-bcf0-9f5e-bb66-03e5649ae210", 00:07:32.788 "is_configured": true, 00:07:32.788 "data_offset": 2048, 00:07:32.788 "data_size": 63488 00:07:32.788 }, 00:07:32.788 { 00:07:32.788 "name": "BaseBdev2", 00:07:32.788 "uuid": "fefdc8fa-27b3-fa55-aa67-86b6e67c687f", 00:07:32.788 "is_configured": true, 00:07:32.788 "data_offset": 2048, 00:07:32.788 "data_size": 63488 00:07:32.788 } 00:07:32.788 ] 00:07:32.788 }' 00:07:32.788 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:32.788 17:26:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.055 17:26:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:33.313 [2024-07-15 17:26:29.114942] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:33.313 [2024-07-15 17:26:29.114971] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:33.313 [2024-07-15 17:26:29.115320] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.313 [2024-07-15 17:26:29.115330] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.314 [2024-07-15 17:26:29.115337] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.314 [2024-07-15 17:26:29.115341] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26ac9c234f00 name raid_bdev1, state offline 00:07:33.314 0 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49574 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 49574 ']' 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 49574 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49574 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:33.314 killing process with pid 49574 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49574' 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 49574 00:07:33.314 [2024-07-15 17:26:29.143518] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.314 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 49574 00:07:33.572 [2024-07-15 17:26:29.155113] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:33.572 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.rXS45P1sZg 00:07:33.572 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:33.572 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:33.573 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:07:33.573 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:33.573 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:33.573 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:33.573 17:26:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:07:33.573 00:07:33.573 real 0m6.165s 00:07:33.573 user 0m9.517s 00:07:33.573 sys 0m1.090s 00:07:33.573 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.573 17:26:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.573 ************************************ 00:07:33.573 END TEST raid_write_error_test 00:07:33.573 ************************************ 00:07:33.573 17:26:29 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:33.573 17:26:29 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:33.573 17:26:29 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:33.573 17:26:29 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:33.573 17:26:29 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.573 17:26:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:33.573 ************************************ 00:07:33.573 START TEST raid_state_function_test 00:07:33.573 ************************************ 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:33.573 17:26:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49700 00:07:33.573 Process raid pid: 49700 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49700' 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49700 /var/tmp/spdk-raid.sock 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 49700 ']' 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:33.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:33.573 17:26:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.829 [2024-07-15 17:26:29.405914] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:07:33.829 [2024-07-15 17:26:29.406148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:34.395 EAL: TSC is not safe to use in SMP mode 00:07:34.395 EAL: TSC is not invariant 00:07:34.395 [2024-07-15 17:26:29.946852] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.395 [2024-07-15 17:26:30.034950] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
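A minimal sketch of the create-then-verify pattern the raid_state_function_test steps below reduce to, using only the rpc.py and jq invocations already visible in this trace (socket /var/tmp/spdk-raid.sock); the expected states are assumptions drawn from the surrounding output rather than an authoritative description of the test:
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py; sock=/var/tmp/spdk-raid.sock
    # ask for the array before its base bdevs exist; the raid bdev should sit in "configuring"
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
    # once BaseBdev1 and BaseBdev2 malloc bdevs are created, the same query should report "online"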
00:07:34.395 [2024-07-15 17:26:30.037288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.395 [2024-07-15 17:26:30.038100] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.395 [2024-07-15 17:26:30.038118] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:34.665 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:34.665 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:07:34.665 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:34.924 [2024-07-15 17:26:30.625715] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:34.924 [2024-07-15 17:26:30.625770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:34.924 [2024-07-15 17:26:30.625775] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:34.924 [2024-07-15 17:26:30.625784] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:34.924 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:35.183 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:35.183 "name": "Existed_Raid", 00:07:35.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.183 "strip_size_kb": 64, 00:07:35.183 "state": "configuring", 00:07:35.183 "raid_level": "concat", 00:07:35.183 "superblock": false, 00:07:35.183 "num_base_bdevs": 2, 00:07:35.183 "num_base_bdevs_discovered": 0, 00:07:35.183 "num_base_bdevs_operational": 2, 00:07:35.183 "base_bdevs_list": [ 00:07:35.183 { 00:07:35.183 "name": "BaseBdev1", 00:07:35.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.183 "is_configured": false, 00:07:35.183 "data_offset": 0, 00:07:35.183 "data_size": 0 00:07:35.183 }, 00:07:35.183 { 00:07:35.183 "name": "BaseBdev2", 
00:07:35.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:35.183 "is_configured": false, 00:07:35.183 "data_offset": 0, 00:07:35.183 "data_size": 0 00:07:35.183 } 00:07:35.183 ] 00:07:35.183 }' 00:07:35.183 17:26:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:35.183 17:26:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:35.441 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:35.699 [2024-07-15 17:26:31.473710] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:35.699 [2024-07-15 17:26:31.473733] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16bbcfc34500 name Existed_Raid, state configuring 00:07:35.699 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:35.957 [2024-07-15 17:26:31.733723] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:35.957 [2024-07-15 17:26:31.733773] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:35.957 [2024-07-15 17:26:31.733778] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:35.957 [2024-07-15 17:26:31.733787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:35.957 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:36.215 [2024-07-15 17:26:31.970718] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:36.215 BaseBdev1 00:07:36.215 17:26:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:36.215 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:36.215 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:36.215 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:36.215 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:36.215 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:36.215 17:26:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:36.473 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:36.731 [ 00:07:36.731 { 00:07:36.731 "name": "BaseBdev1", 00:07:36.731 "aliases": [ 00:07:36.731 "60b1b5d2-42cf-11ef-96ac-773515fba644" 00:07:36.731 ], 00:07:36.731 "product_name": "Malloc disk", 00:07:36.731 "block_size": 512, 00:07:36.731 "num_blocks": 65536, 00:07:36.731 "uuid": "60b1b5d2-42cf-11ef-96ac-773515fba644", 00:07:36.731 "assigned_rate_limits": { 00:07:36.731 "rw_ios_per_sec": 0, 00:07:36.731 "rw_mbytes_per_sec": 0, 00:07:36.731 "r_mbytes_per_sec": 0, 00:07:36.731 "w_mbytes_per_sec": 0 00:07:36.731 }, 
00:07:36.731 "claimed": true, 00:07:36.731 "claim_type": "exclusive_write", 00:07:36.731 "zoned": false, 00:07:36.731 "supported_io_types": { 00:07:36.731 "read": true, 00:07:36.731 "write": true, 00:07:36.731 "unmap": true, 00:07:36.731 "flush": true, 00:07:36.731 "reset": true, 00:07:36.731 "nvme_admin": false, 00:07:36.731 "nvme_io": false, 00:07:36.731 "nvme_io_md": false, 00:07:36.731 "write_zeroes": true, 00:07:36.731 "zcopy": true, 00:07:36.731 "get_zone_info": false, 00:07:36.731 "zone_management": false, 00:07:36.731 "zone_append": false, 00:07:36.731 "compare": false, 00:07:36.731 "compare_and_write": false, 00:07:36.731 "abort": true, 00:07:36.731 "seek_hole": false, 00:07:36.731 "seek_data": false, 00:07:36.731 "copy": true, 00:07:36.731 "nvme_iov_md": false 00:07:36.731 }, 00:07:36.731 "memory_domains": [ 00:07:36.731 { 00:07:36.731 "dma_device_id": "system", 00:07:36.731 "dma_device_type": 1 00:07:36.731 }, 00:07:36.731 { 00:07:36.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.731 "dma_device_type": 2 00:07:36.731 } 00:07:36.731 ], 00:07:36.731 "driver_specific": {} 00:07:36.731 } 00:07:36.731 ] 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.731 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:36.990 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:36.990 "name": "Existed_Raid", 00:07:36.990 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:36.990 "strip_size_kb": 64, 00:07:36.990 "state": "configuring", 00:07:36.990 "raid_level": "concat", 00:07:36.990 "superblock": false, 00:07:36.990 "num_base_bdevs": 2, 00:07:36.990 "num_base_bdevs_discovered": 1, 00:07:36.990 "num_base_bdevs_operational": 2, 00:07:36.990 "base_bdevs_list": [ 00:07:36.990 { 00:07:36.990 "name": "BaseBdev1", 00:07:36.990 "uuid": "60b1b5d2-42cf-11ef-96ac-773515fba644", 00:07:36.990 "is_configured": true, 00:07:36.990 "data_offset": 0, 00:07:36.990 "data_size": 65536 00:07:36.990 }, 00:07:36.990 { 00:07:36.990 "name": "BaseBdev2", 00:07:36.990 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:36.990 "is_configured": false, 00:07:36.990 "data_offset": 0, 00:07:36.990 "data_size": 0 00:07:36.990 } 00:07:36.990 ] 00:07:36.990 }' 00:07:36.990 17:26:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:36.990 17:26:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.248 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:37.506 [2024-07-15 17:26:33.285779] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:37.506 [2024-07-15 17:26:33.285828] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16bbcfc34500 name Existed_Raid, state configuring 00:07:37.506 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:37.764 [2024-07-15 17:26:33.577846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:37.764 [2024-07-15 17:26:33.578716] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:37.764 [2024-07-15 17:26:33.578787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:38.022 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:38.280 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:38.280 "name": "Existed_Raid", 00:07:38.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.280 "strip_size_kb": 64, 00:07:38.280 "state": "configuring", 00:07:38.280 "raid_level": "concat", 00:07:38.280 "superblock": false, 00:07:38.280 "num_base_bdevs": 2, 00:07:38.280 "num_base_bdevs_discovered": 1, 00:07:38.280 
"num_base_bdevs_operational": 2, 00:07:38.280 "base_bdevs_list": [ 00:07:38.280 { 00:07:38.280 "name": "BaseBdev1", 00:07:38.280 "uuid": "60b1b5d2-42cf-11ef-96ac-773515fba644", 00:07:38.280 "is_configured": true, 00:07:38.280 "data_offset": 0, 00:07:38.280 "data_size": 65536 00:07:38.280 }, 00:07:38.280 { 00:07:38.280 "name": "BaseBdev2", 00:07:38.280 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:38.280 "is_configured": false, 00:07:38.280 "data_offset": 0, 00:07:38.280 "data_size": 0 00:07:38.280 } 00:07:38.280 ] 00:07:38.280 }' 00:07:38.280 17:26:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:38.280 17:26:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.539 17:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:38.797 [2024-07-15 17:26:34.538053] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:38.797 [2024-07-15 17:26:34.538082] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16bbcfc34a00 00:07:38.797 [2024-07-15 17:26:34.538087] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:38.797 [2024-07-15 17:26:34.538110] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16bbcfc97e20 00:07:38.797 [2024-07-15 17:26:34.538199] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16bbcfc34a00 00:07:38.797 [2024-07-15 17:26:34.538203] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16bbcfc34a00 00:07:38.797 [2024-07-15 17:26:34.538236] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.797 BaseBdev2 00:07:38.797 17:26:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:38.797 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:38.797 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:38.797 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:38.797 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:38.797 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:38.797 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:39.053 17:26:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:39.310 [ 00:07:39.310 { 00:07:39.310 "name": "BaseBdev2", 00:07:39.310 "aliases": [ 00:07:39.310 "62399576-42cf-11ef-96ac-773515fba644" 00:07:39.310 ], 00:07:39.310 "product_name": "Malloc disk", 00:07:39.310 "block_size": 512, 00:07:39.310 "num_blocks": 65536, 00:07:39.310 "uuid": "62399576-42cf-11ef-96ac-773515fba644", 00:07:39.310 "assigned_rate_limits": { 00:07:39.310 "rw_ios_per_sec": 0, 00:07:39.310 "rw_mbytes_per_sec": 0, 00:07:39.310 "r_mbytes_per_sec": 0, 00:07:39.310 "w_mbytes_per_sec": 0 00:07:39.310 }, 00:07:39.310 "claimed": true, 00:07:39.310 "claim_type": "exclusive_write", 00:07:39.310 "zoned": 
false, 00:07:39.310 "supported_io_types": { 00:07:39.310 "read": true, 00:07:39.310 "write": true, 00:07:39.310 "unmap": true, 00:07:39.310 "flush": true, 00:07:39.310 "reset": true, 00:07:39.310 "nvme_admin": false, 00:07:39.310 "nvme_io": false, 00:07:39.310 "nvme_io_md": false, 00:07:39.310 "write_zeroes": true, 00:07:39.310 "zcopy": true, 00:07:39.310 "get_zone_info": false, 00:07:39.310 "zone_management": false, 00:07:39.310 "zone_append": false, 00:07:39.310 "compare": false, 00:07:39.310 "compare_and_write": false, 00:07:39.310 "abort": true, 00:07:39.310 "seek_hole": false, 00:07:39.310 "seek_data": false, 00:07:39.310 "copy": true, 00:07:39.310 "nvme_iov_md": false 00:07:39.310 }, 00:07:39.310 "memory_domains": [ 00:07:39.310 { 00:07:39.310 "dma_device_id": "system", 00:07:39.310 "dma_device_type": 1 00:07:39.310 }, 00:07:39.310 { 00:07:39.310 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:39.310 "dma_device_type": 2 00:07:39.310 } 00:07:39.310 ], 00:07:39.310 "driver_specific": {} 00:07:39.310 } 00:07:39.310 ] 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:39.310 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:39.568 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:39.568 "name": "Existed_Raid", 00:07:39.568 "uuid": "62399c3f-42cf-11ef-96ac-773515fba644", 00:07:39.568 "strip_size_kb": 64, 00:07:39.568 "state": "online", 00:07:39.568 "raid_level": "concat", 00:07:39.568 "superblock": false, 00:07:39.568 "num_base_bdevs": 2, 00:07:39.568 "num_base_bdevs_discovered": 2, 00:07:39.568 "num_base_bdevs_operational": 2, 00:07:39.568 "base_bdevs_list": [ 00:07:39.568 { 00:07:39.568 "name": "BaseBdev1", 00:07:39.568 "uuid": "60b1b5d2-42cf-11ef-96ac-773515fba644", 00:07:39.568 "is_configured": true, 00:07:39.568 "data_offset": 0, 00:07:39.568 "data_size": 65536 00:07:39.568 }, 00:07:39.568 { 
00:07:39.568 "name": "BaseBdev2", 00:07:39.568 "uuid": "62399576-42cf-11ef-96ac-773515fba644", 00:07:39.568 "is_configured": true, 00:07:39.568 "data_offset": 0, 00:07:39.568 "data_size": 65536 00:07:39.568 } 00:07:39.568 ] 00:07:39.568 }' 00:07:39.568 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:39.568 17:26:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:40.175 [2024-07-15 17:26:35.897984] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:40.175 "name": "Existed_Raid", 00:07:40.175 "aliases": [ 00:07:40.175 "62399c3f-42cf-11ef-96ac-773515fba644" 00:07:40.175 ], 00:07:40.175 "product_name": "Raid Volume", 00:07:40.175 "block_size": 512, 00:07:40.175 "num_blocks": 131072, 00:07:40.175 "uuid": "62399c3f-42cf-11ef-96ac-773515fba644", 00:07:40.175 "assigned_rate_limits": { 00:07:40.175 "rw_ios_per_sec": 0, 00:07:40.175 "rw_mbytes_per_sec": 0, 00:07:40.175 "r_mbytes_per_sec": 0, 00:07:40.175 "w_mbytes_per_sec": 0 00:07:40.175 }, 00:07:40.175 "claimed": false, 00:07:40.175 "zoned": false, 00:07:40.175 "supported_io_types": { 00:07:40.175 "read": true, 00:07:40.175 "write": true, 00:07:40.175 "unmap": true, 00:07:40.175 "flush": true, 00:07:40.175 "reset": true, 00:07:40.175 "nvme_admin": false, 00:07:40.175 "nvme_io": false, 00:07:40.175 "nvme_io_md": false, 00:07:40.175 "write_zeroes": true, 00:07:40.175 "zcopy": false, 00:07:40.175 "get_zone_info": false, 00:07:40.175 "zone_management": false, 00:07:40.175 "zone_append": false, 00:07:40.175 "compare": false, 00:07:40.175 "compare_and_write": false, 00:07:40.175 "abort": false, 00:07:40.175 "seek_hole": false, 00:07:40.175 "seek_data": false, 00:07:40.175 "copy": false, 00:07:40.175 "nvme_iov_md": false 00:07:40.175 }, 00:07:40.175 "memory_domains": [ 00:07:40.175 { 00:07:40.175 "dma_device_id": "system", 00:07:40.175 "dma_device_type": 1 00:07:40.175 }, 00:07:40.175 { 00:07:40.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.175 "dma_device_type": 2 00:07:40.175 }, 00:07:40.175 { 00:07:40.175 "dma_device_id": "system", 00:07:40.175 "dma_device_type": 1 00:07:40.175 }, 00:07:40.175 { 00:07:40.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.175 "dma_device_type": 2 00:07:40.175 } 00:07:40.175 ], 00:07:40.175 "driver_specific": { 00:07:40.175 "raid": { 00:07:40.175 "uuid": "62399c3f-42cf-11ef-96ac-773515fba644", 00:07:40.175 "strip_size_kb": 64, 00:07:40.175 "state": 
"online", 00:07:40.175 "raid_level": "concat", 00:07:40.175 "superblock": false, 00:07:40.175 "num_base_bdevs": 2, 00:07:40.175 "num_base_bdevs_discovered": 2, 00:07:40.175 "num_base_bdevs_operational": 2, 00:07:40.175 "base_bdevs_list": [ 00:07:40.175 { 00:07:40.175 "name": "BaseBdev1", 00:07:40.175 "uuid": "60b1b5d2-42cf-11ef-96ac-773515fba644", 00:07:40.175 "is_configured": true, 00:07:40.175 "data_offset": 0, 00:07:40.175 "data_size": 65536 00:07:40.175 }, 00:07:40.175 { 00:07:40.175 "name": "BaseBdev2", 00:07:40.175 "uuid": "62399576-42cf-11ef-96ac-773515fba644", 00:07:40.175 "is_configured": true, 00:07:40.175 "data_offset": 0, 00:07:40.175 "data_size": 65536 00:07:40.175 } 00:07:40.175 ] 00:07:40.175 } 00:07:40.175 } 00:07:40.175 }' 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:40.175 BaseBdev2' 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:40.175 17:26:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:40.470 "name": "BaseBdev1", 00:07:40.470 "aliases": [ 00:07:40.470 "60b1b5d2-42cf-11ef-96ac-773515fba644" 00:07:40.470 ], 00:07:40.470 "product_name": "Malloc disk", 00:07:40.470 "block_size": 512, 00:07:40.470 "num_blocks": 65536, 00:07:40.470 "uuid": "60b1b5d2-42cf-11ef-96ac-773515fba644", 00:07:40.470 "assigned_rate_limits": { 00:07:40.470 "rw_ios_per_sec": 0, 00:07:40.470 "rw_mbytes_per_sec": 0, 00:07:40.470 "r_mbytes_per_sec": 0, 00:07:40.470 "w_mbytes_per_sec": 0 00:07:40.470 }, 00:07:40.470 "claimed": true, 00:07:40.470 "claim_type": "exclusive_write", 00:07:40.470 "zoned": false, 00:07:40.470 "supported_io_types": { 00:07:40.470 "read": true, 00:07:40.470 "write": true, 00:07:40.470 "unmap": true, 00:07:40.470 "flush": true, 00:07:40.470 "reset": true, 00:07:40.470 "nvme_admin": false, 00:07:40.470 "nvme_io": false, 00:07:40.470 "nvme_io_md": false, 00:07:40.470 "write_zeroes": true, 00:07:40.470 "zcopy": true, 00:07:40.470 "get_zone_info": false, 00:07:40.470 "zone_management": false, 00:07:40.470 "zone_append": false, 00:07:40.470 "compare": false, 00:07:40.470 "compare_and_write": false, 00:07:40.470 "abort": true, 00:07:40.470 "seek_hole": false, 00:07:40.470 "seek_data": false, 00:07:40.470 "copy": true, 00:07:40.470 "nvme_iov_md": false 00:07:40.470 }, 00:07:40.470 "memory_domains": [ 00:07:40.470 { 00:07:40.470 "dma_device_id": "system", 00:07:40.470 "dma_device_type": 1 00:07:40.470 }, 00:07:40.470 { 00:07:40.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:40.470 "dma_device_type": 2 00:07:40.470 } 00:07:40.470 ], 00:07:40.470 "driver_specific": {} 00:07:40.470 }' 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:40.470 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:41.037 "name": "BaseBdev2", 00:07:41.037 "aliases": [ 00:07:41.037 "62399576-42cf-11ef-96ac-773515fba644" 00:07:41.037 ], 00:07:41.037 "product_name": "Malloc disk", 00:07:41.037 "block_size": 512, 00:07:41.037 "num_blocks": 65536, 00:07:41.037 "uuid": "62399576-42cf-11ef-96ac-773515fba644", 00:07:41.037 "assigned_rate_limits": { 00:07:41.037 "rw_ios_per_sec": 0, 00:07:41.037 "rw_mbytes_per_sec": 0, 00:07:41.037 "r_mbytes_per_sec": 0, 00:07:41.037 "w_mbytes_per_sec": 0 00:07:41.037 }, 00:07:41.037 "claimed": true, 00:07:41.037 "claim_type": "exclusive_write", 00:07:41.037 "zoned": false, 00:07:41.037 "supported_io_types": { 00:07:41.037 "read": true, 00:07:41.037 "write": true, 00:07:41.037 "unmap": true, 00:07:41.037 "flush": true, 00:07:41.037 "reset": true, 00:07:41.037 "nvme_admin": false, 00:07:41.037 "nvme_io": false, 00:07:41.037 "nvme_io_md": false, 00:07:41.037 "write_zeroes": true, 00:07:41.037 "zcopy": true, 00:07:41.037 "get_zone_info": false, 00:07:41.037 "zone_management": false, 00:07:41.037 "zone_append": false, 00:07:41.037 "compare": false, 00:07:41.037 "compare_and_write": false, 00:07:41.037 "abort": true, 00:07:41.037 "seek_hole": false, 00:07:41.037 "seek_data": false, 00:07:41.037 "copy": true, 00:07:41.037 "nvme_iov_md": false 00:07:41.037 }, 00:07:41.037 "memory_domains": [ 00:07:41.037 { 00:07:41.037 "dma_device_id": "system", 00:07:41.037 "dma_device_type": 1 00:07:41.037 }, 00:07:41.037 { 00:07:41.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:41.037 "dma_device_type": 2 00:07:41.037 } 00:07:41.037 ], 00:07:41.037 "driver_specific": {} 00:07:41.037 }' 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:41.037 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:41.294 [2024-07-15 17:26:36.909972] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:41.294 [2024-07-15 17:26:36.909997] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:41.294 [2024-07-15 17:26:36.910021] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:41.294 17:26:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:41.553 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:41.553 "name": "Existed_Raid", 00:07:41.553 "uuid": "62399c3f-42cf-11ef-96ac-773515fba644", 00:07:41.553 "strip_size_kb": 64, 00:07:41.553 "state": "offline", 00:07:41.553 "raid_level": "concat", 00:07:41.553 "superblock": false, 00:07:41.553 
"num_base_bdevs": 2, 00:07:41.553 "num_base_bdevs_discovered": 1, 00:07:41.553 "num_base_bdevs_operational": 1, 00:07:41.553 "base_bdevs_list": [ 00:07:41.553 { 00:07:41.553 "name": null, 00:07:41.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:41.553 "is_configured": false, 00:07:41.553 "data_offset": 0, 00:07:41.553 "data_size": 65536 00:07:41.553 }, 00:07:41.553 { 00:07:41.553 "name": "BaseBdev2", 00:07:41.553 "uuid": "62399576-42cf-11ef-96ac-773515fba644", 00:07:41.553 "is_configured": true, 00:07:41.553 "data_offset": 0, 00:07:41.553 "data_size": 65536 00:07:41.553 } 00:07:41.553 ] 00:07:41.553 }' 00:07:41.553 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:41.553 17:26:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:41.812 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:41.812 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:41.812 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:41.812 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:42.070 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:42.070 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:42.070 17:26:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:42.327 [2024-07-15 17:26:38.023857] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:42.327 [2024-07-15 17:26:38.023889] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16bbcfc34a00 name Existed_Raid, state offline 00:07:42.327 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:42.327 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:42.327 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:42.327 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49700 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 49700 ']' 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 49700 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 49700 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:42.585 killing process with pid 49700 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49700' 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 49700 00:07:42.585 [2024-07-15 17:26:38.345608] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.585 [2024-07-15 17:26:38.345641] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:42.585 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 49700 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:42.842 00:07:42.842 real 0m9.130s 00:07:42.842 user 0m15.912s 00:07:42.842 sys 0m1.588s 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.842 ************************************ 00:07:42.842 END TEST raid_state_function_test 00:07:42.842 ************************************ 00:07:42.842 17:26:38 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:42.842 17:26:38 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:42.842 17:26:38 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:42.842 17:26:38 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.842 17:26:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:42.842 ************************************ 00:07:42.842 START TEST raid_state_function_test_sb 00:07:42.842 ************************************ 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:42.842 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=49971 00:07:42.843 Process raid pid: 49971 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49971' 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 49971 /var/tmp/spdk-raid.sock 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 49971 ']' 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:42.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:42.843 17:26:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:42.843 [2024-07-15 17:26:38.578278] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:07:42.843 [2024-07-15 17:26:38.578451] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:43.406 EAL: TSC is not safe to use in SMP mode 00:07:43.406 EAL: TSC is not invariant 00:07:43.406 [2024-07-15 17:26:39.108742] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.406 [2024-07-15 17:26:39.196706] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
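(Editor's note: the state-function test traced above drives the raid bdev module purely through the plain RPC interface, so the same state transitions can be reproduced by hand against the bdev_svc socket shown in the log. A minimal shell sketch follows; the rpc.py path, socket, bdev names, sizes and the -z/-s flags are copied from commands that appear verbatim in this trace, while the ordering and comments are illustrative only and not part of the test script.)

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"   # same rpc.py and socket as the trace
  # create two 32 MiB malloc base bdevs with 512-byte blocks (65536 blocks each, as in the dumps below)
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_malloc_create 32 512 -b BaseBdev2
  # assemble a concat array with a 64 KiB strip size; -s asks for an on-disk superblock
  $RPC bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # inspect the array state (expected "online" with 2 of 2 base bdevs discovered)
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
  # deleting a base bdev of a non-redundant level (concat) drives the array to "offline"
  $RPC bdev_malloc_delete BaseBdev1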
00:07:43.406 [2024-07-15 17:26:39.198799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.406 [2024-07-15 17:26:39.199581] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.406 [2024-07-15 17:26:39.199596] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:43.971 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:43.971 17:26:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:07:43.971 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:44.265 [2024-07-15 17:26:39.859260] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:44.265 [2024-07-15 17:26:39.859303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:44.265 [2024-07-15 17:26:39.859308] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:44.265 [2024-07-15 17:26:39.859323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.265 17:26:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:44.523 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:44.523 "name": "Existed_Raid", 00:07:44.523 "uuid": "65658e21-42cf-11ef-96ac-773515fba644", 00:07:44.523 "strip_size_kb": 64, 00:07:44.523 "state": "configuring", 00:07:44.523 "raid_level": "concat", 00:07:44.523 "superblock": true, 00:07:44.523 "num_base_bdevs": 2, 00:07:44.523 "num_base_bdevs_discovered": 0, 00:07:44.523 "num_base_bdevs_operational": 2, 00:07:44.523 "base_bdevs_list": [ 00:07:44.523 { 00:07:44.523 "name": "BaseBdev1", 00:07:44.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.523 "is_configured": false, 00:07:44.523 "data_offset": 0, 00:07:44.523 "data_size": 0 00:07:44.523 }, 
00:07:44.523 { 00:07:44.523 "name": "BaseBdev2", 00:07:44.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:44.523 "is_configured": false, 00:07:44.523 "data_offset": 0, 00:07:44.523 "data_size": 0 00:07:44.523 } 00:07:44.523 ] 00:07:44.523 }' 00:07:44.523 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:44.523 17:26:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:44.781 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:45.038 [2024-07-15 17:26:40.759259] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:45.038 [2024-07-15 17:26:40.759284] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x124602234500 name Existed_Raid, state configuring 00:07:45.038 17:26:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:45.296 [2024-07-15 17:26:41.011303] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:45.296 [2024-07-15 17:26:41.011355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:45.296 [2024-07-15 17:26:41.011360] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:45.296 [2024-07-15 17:26:41.011369] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:45.296 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:45.553 [2024-07-15 17:26:41.332365] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:45.553 BaseBdev1 00:07:45.553 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:45.553 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:45.553 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:45.553 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:45.553 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:45.553 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:45.553 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:45.810 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:46.123 [ 00:07:46.123 { 00:07:46.123 "name": "BaseBdev1", 00:07:46.123 "aliases": [ 00:07:46.123 "66462d12-42cf-11ef-96ac-773515fba644" 00:07:46.123 ], 00:07:46.123 "product_name": "Malloc disk", 00:07:46.123 "block_size": 512, 00:07:46.123 "num_blocks": 65536, 00:07:46.123 "uuid": "66462d12-42cf-11ef-96ac-773515fba644", 00:07:46.123 "assigned_rate_limits": { 00:07:46.123 "rw_ios_per_sec": 0, 00:07:46.123 "rw_mbytes_per_sec": 
0, 00:07:46.123 "r_mbytes_per_sec": 0, 00:07:46.123 "w_mbytes_per_sec": 0 00:07:46.123 }, 00:07:46.123 "claimed": true, 00:07:46.123 "claim_type": "exclusive_write", 00:07:46.123 "zoned": false, 00:07:46.123 "supported_io_types": { 00:07:46.123 "read": true, 00:07:46.123 "write": true, 00:07:46.123 "unmap": true, 00:07:46.123 "flush": true, 00:07:46.123 "reset": true, 00:07:46.123 "nvme_admin": false, 00:07:46.123 "nvme_io": false, 00:07:46.123 "nvme_io_md": false, 00:07:46.123 "write_zeroes": true, 00:07:46.123 "zcopy": true, 00:07:46.123 "get_zone_info": false, 00:07:46.123 "zone_management": false, 00:07:46.123 "zone_append": false, 00:07:46.123 "compare": false, 00:07:46.123 "compare_and_write": false, 00:07:46.123 "abort": true, 00:07:46.123 "seek_hole": false, 00:07:46.123 "seek_data": false, 00:07:46.123 "copy": true, 00:07:46.123 "nvme_iov_md": false 00:07:46.123 }, 00:07:46.123 "memory_domains": [ 00:07:46.123 { 00:07:46.123 "dma_device_id": "system", 00:07:46.123 "dma_device_type": 1 00:07:46.123 }, 00:07:46.123 { 00:07:46.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:46.123 "dma_device_type": 2 00:07:46.123 } 00:07:46.123 ], 00:07:46.123 "driver_specific": {} 00:07:46.123 } 00:07:46.123 ] 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:46.123 17:26:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:46.382 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:46.382 "name": "Existed_Raid", 00:07:46.382 "uuid": "661557bf-42cf-11ef-96ac-773515fba644", 00:07:46.382 "strip_size_kb": 64, 00:07:46.382 "state": "configuring", 00:07:46.382 "raid_level": "concat", 00:07:46.382 "superblock": true, 00:07:46.382 "num_base_bdevs": 2, 00:07:46.382 "num_base_bdevs_discovered": 1, 00:07:46.382 "num_base_bdevs_operational": 2, 00:07:46.382 "base_bdevs_list": [ 00:07:46.382 { 00:07:46.382 "name": "BaseBdev1", 00:07:46.382 "uuid": "66462d12-42cf-11ef-96ac-773515fba644", 00:07:46.382 "is_configured": true, 00:07:46.382 "data_offset": 2048, 00:07:46.382 "data_size": 
63488 00:07:46.382 }, 00:07:46.382 { 00:07:46.382 "name": "BaseBdev2", 00:07:46.382 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:46.382 "is_configured": false, 00:07:46.382 "data_offset": 0, 00:07:46.382 "data_size": 0 00:07:46.382 } 00:07:46.382 ] 00:07:46.382 }' 00:07:46.382 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:46.382 17:26:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:46.641 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:46.899 [2024-07-15 17:26:42.671330] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:46.899 [2024-07-15 17:26:42.671366] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x124602234500 name Existed_Raid, state configuring 00:07:46.899 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:47.158 [2024-07-15 17:26:42.943382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.158 [2024-07-15 17:26:42.944243] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:47.158 [2024-07-15 17:26:42.944282] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.158 17:26:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:47.725 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:47.725 "name": "Existed_Raid", 00:07:47.725 "uuid": "673c2770-42cf-11ef-96ac-773515fba644", 00:07:47.725 "strip_size_kb": 64, 00:07:47.725 
"state": "configuring", 00:07:47.725 "raid_level": "concat", 00:07:47.725 "superblock": true, 00:07:47.726 "num_base_bdevs": 2, 00:07:47.726 "num_base_bdevs_discovered": 1, 00:07:47.726 "num_base_bdevs_operational": 2, 00:07:47.726 "base_bdevs_list": [ 00:07:47.726 { 00:07:47.726 "name": "BaseBdev1", 00:07:47.726 "uuid": "66462d12-42cf-11ef-96ac-773515fba644", 00:07:47.726 "is_configured": true, 00:07:47.726 "data_offset": 2048, 00:07:47.726 "data_size": 63488 00:07:47.726 }, 00:07:47.726 { 00:07:47.726 "name": "BaseBdev2", 00:07:47.726 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:47.726 "is_configured": false, 00:07:47.726 "data_offset": 0, 00:07:47.726 "data_size": 0 00:07:47.726 } 00:07:47.726 ] 00:07:47.726 }' 00:07:47.726 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:47.726 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:47.984 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:48.244 [2024-07-15 17:26:43.827520] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:48.244 [2024-07-15 17:26:43.827615] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x124602234a00 00:07:48.244 [2024-07-15 17:26:43.827622] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:48.244 [2024-07-15 17:26:43.827659] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x124602297e20 00:07:48.244 [2024-07-15 17:26:43.827705] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x124602234a00 00:07:48.244 [2024-07-15 17:26:43.827710] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x124602234a00 00:07:48.244 [2024-07-15 17:26:43.827731] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:48.244 BaseBdev2 00:07:48.244 17:26:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:48.244 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:48.244 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:48.244 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:48.244 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:48.244 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:48.244 17:26:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:48.507 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:48.772 [ 00:07:48.772 { 00:07:48.772 "name": "BaseBdev2", 00:07:48.772 "aliases": [ 00:07:48.772 "67c30b5a-42cf-11ef-96ac-773515fba644" 00:07:48.772 ], 00:07:48.772 "product_name": "Malloc disk", 00:07:48.772 "block_size": 512, 00:07:48.772 "num_blocks": 65536, 00:07:48.772 "uuid": "67c30b5a-42cf-11ef-96ac-773515fba644", 00:07:48.772 "assigned_rate_limits": { 00:07:48.772 "rw_ios_per_sec": 0, 
00:07:48.772 "rw_mbytes_per_sec": 0, 00:07:48.772 "r_mbytes_per_sec": 0, 00:07:48.772 "w_mbytes_per_sec": 0 00:07:48.772 }, 00:07:48.772 "claimed": true, 00:07:48.772 "claim_type": "exclusive_write", 00:07:48.772 "zoned": false, 00:07:48.772 "supported_io_types": { 00:07:48.772 "read": true, 00:07:48.772 "write": true, 00:07:48.772 "unmap": true, 00:07:48.772 "flush": true, 00:07:48.772 "reset": true, 00:07:48.772 "nvme_admin": false, 00:07:48.772 "nvme_io": false, 00:07:48.772 "nvme_io_md": false, 00:07:48.772 "write_zeroes": true, 00:07:48.772 "zcopy": true, 00:07:48.772 "get_zone_info": false, 00:07:48.772 "zone_management": false, 00:07:48.772 "zone_append": false, 00:07:48.772 "compare": false, 00:07:48.772 "compare_and_write": false, 00:07:48.772 "abort": true, 00:07:48.772 "seek_hole": false, 00:07:48.772 "seek_data": false, 00:07:48.772 "copy": true, 00:07:48.772 "nvme_iov_md": false 00:07:48.772 }, 00:07:48.772 "memory_domains": [ 00:07:48.772 { 00:07:48.772 "dma_device_id": "system", 00:07:48.772 "dma_device_type": 1 00:07:48.772 }, 00:07:48.772 { 00:07:48.772 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:48.772 "dma_device_type": 2 00:07:48.772 } 00:07:48.772 ], 00:07:48.772 "driver_specific": {} 00:07:48.772 } 00:07:48.772 ] 00:07:48.772 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:48.773 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:49.041 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:49.041 "name": "Existed_Raid", 00:07:49.041 "uuid": "673c2770-42cf-11ef-96ac-773515fba644", 00:07:49.041 "strip_size_kb": 64, 00:07:49.041 "state": "online", 00:07:49.041 "raid_level": "concat", 00:07:49.041 "superblock": true, 00:07:49.041 "num_base_bdevs": 2, 00:07:49.041 "num_base_bdevs_discovered": 2, 00:07:49.041 "num_base_bdevs_operational": 2, 
00:07:49.041 "base_bdevs_list": [ 00:07:49.041 { 00:07:49.041 "name": "BaseBdev1", 00:07:49.041 "uuid": "66462d12-42cf-11ef-96ac-773515fba644", 00:07:49.041 "is_configured": true, 00:07:49.041 "data_offset": 2048, 00:07:49.041 "data_size": 63488 00:07:49.041 }, 00:07:49.041 { 00:07:49.041 "name": "BaseBdev2", 00:07:49.041 "uuid": "67c30b5a-42cf-11ef-96ac-773515fba644", 00:07:49.041 "is_configured": true, 00:07:49.041 "data_offset": 2048, 00:07:49.041 "data_size": 63488 00:07:49.041 } 00:07:49.041 ] 00:07:49.041 }' 00:07:49.041 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:49.041 17:26:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:49.312 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:49.312 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:49.312 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:49.312 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:49.312 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:49.312 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:49.312 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:49.312 17:26:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:49.585 [2024-07-15 17:26:45.203483] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:49.585 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:49.585 "name": "Existed_Raid", 00:07:49.585 "aliases": [ 00:07:49.585 "673c2770-42cf-11ef-96ac-773515fba644" 00:07:49.585 ], 00:07:49.585 "product_name": "Raid Volume", 00:07:49.585 "block_size": 512, 00:07:49.585 "num_blocks": 126976, 00:07:49.585 "uuid": "673c2770-42cf-11ef-96ac-773515fba644", 00:07:49.585 "assigned_rate_limits": { 00:07:49.585 "rw_ios_per_sec": 0, 00:07:49.585 "rw_mbytes_per_sec": 0, 00:07:49.585 "r_mbytes_per_sec": 0, 00:07:49.585 "w_mbytes_per_sec": 0 00:07:49.585 }, 00:07:49.585 "claimed": false, 00:07:49.585 "zoned": false, 00:07:49.585 "supported_io_types": { 00:07:49.585 "read": true, 00:07:49.585 "write": true, 00:07:49.585 "unmap": true, 00:07:49.585 "flush": true, 00:07:49.585 "reset": true, 00:07:49.585 "nvme_admin": false, 00:07:49.585 "nvme_io": false, 00:07:49.585 "nvme_io_md": false, 00:07:49.585 "write_zeroes": true, 00:07:49.585 "zcopy": false, 00:07:49.585 "get_zone_info": false, 00:07:49.585 "zone_management": false, 00:07:49.585 "zone_append": false, 00:07:49.585 "compare": false, 00:07:49.585 "compare_and_write": false, 00:07:49.585 "abort": false, 00:07:49.585 "seek_hole": false, 00:07:49.585 "seek_data": false, 00:07:49.585 "copy": false, 00:07:49.585 "nvme_iov_md": false 00:07:49.585 }, 00:07:49.585 "memory_domains": [ 00:07:49.585 { 00:07:49.585 "dma_device_id": "system", 00:07:49.585 "dma_device_type": 1 00:07:49.585 }, 00:07:49.585 { 00:07:49.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.585 "dma_device_type": 2 00:07:49.585 }, 00:07:49.585 { 00:07:49.585 "dma_device_id": "system", 00:07:49.585 "dma_device_type": 1 00:07:49.585 
}, 00:07:49.585 { 00:07:49.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.585 "dma_device_type": 2 00:07:49.585 } 00:07:49.585 ], 00:07:49.585 "driver_specific": { 00:07:49.585 "raid": { 00:07:49.585 "uuid": "673c2770-42cf-11ef-96ac-773515fba644", 00:07:49.585 "strip_size_kb": 64, 00:07:49.585 "state": "online", 00:07:49.585 "raid_level": "concat", 00:07:49.585 "superblock": true, 00:07:49.585 "num_base_bdevs": 2, 00:07:49.585 "num_base_bdevs_discovered": 2, 00:07:49.585 "num_base_bdevs_operational": 2, 00:07:49.585 "base_bdevs_list": [ 00:07:49.585 { 00:07:49.585 "name": "BaseBdev1", 00:07:49.585 "uuid": "66462d12-42cf-11ef-96ac-773515fba644", 00:07:49.585 "is_configured": true, 00:07:49.585 "data_offset": 2048, 00:07:49.585 "data_size": 63488 00:07:49.585 }, 00:07:49.585 { 00:07:49.585 "name": "BaseBdev2", 00:07:49.585 "uuid": "67c30b5a-42cf-11ef-96ac-773515fba644", 00:07:49.585 "is_configured": true, 00:07:49.585 "data_offset": 2048, 00:07:49.585 "data_size": 63488 00:07:49.585 } 00:07:49.585 ] 00:07:49.585 } 00:07:49.585 } 00:07:49.585 }' 00:07:49.585 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:49.585 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:49.585 BaseBdev2' 00:07:49.585 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:49.585 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:49.585 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:49.859 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:49.859 "name": "BaseBdev1", 00:07:49.859 "aliases": [ 00:07:49.859 "66462d12-42cf-11ef-96ac-773515fba644" 00:07:49.859 ], 00:07:49.859 "product_name": "Malloc disk", 00:07:49.859 "block_size": 512, 00:07:49.859 "num_blocks": 65536, 00:07:49.859 "uuid": "66462d12-42cf-11ef-96ac-773515fba644", 00:07:49.859 "assigned_rate_limits": { 00:07:49.859 "rw_ios_per_sec": 0, 00:07:49.859 "rw_mbytes_per_sec": 0, 00:07:49.859 "r_mbytes_per_sec": 0, 00:07:49.859 "w_mbytes_per_sec": 0 00:07:49.859 }, 00:07:49.859 "claimed": true, 00:07:49.859 "claim_type": "exclusive_write", 00:07:49.859 "zoned": false, 00:07:49.859 "supported_io_types": { 00:07:49.859 "read": true, 00:07:49.859 "write": true, 00:07:49.859 "unmap": true, 00:07:49.859 "flush": true, 00:07:49.859 "reset": true, 00:07:49.859 "nvme_admin": false, 00:07:49.859 "nvme_io": false, 00:07:49.859 "nvme_io_md": false, 00:07:49.859 "write_zeroes": true, 00:07:49.859 "zcopy": true, 00:07:49.859 "get_zone_info": false, 00:07:49.859 "zone_management": false, 00:07:49.859 "zone_append": false, 00:07:49.859 "compare": false, 00:07:49.859 "compare_and_write": false, 00:07:49.859 "abort": true, 00:07:49.859 "seek_hole": false, 00:07:49.859 "seek_data": false, 00:07:49.859 "copy": true, 00:07:49.859 "nvme_iov_md": false 00:07:49.859 }, 00:07:49.860 "memory_domains": [ 00:07:49.860 { 00:07:49.860 "dma_device_id": "system", 00:07:49.860 "dma_device_type": 1 00:07:49.860 }, 00:07:49.860 { 00:07:49.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:49.860 "dma_device_type": 2 00:07:49.860 } 00:07:49.860 ], 00:07:49.860 "driver_specific": {} 00:07:49.860 }' 00:07:49.860 17:26:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:49.860 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:50.122 "name": "BaseBdev2", 00:07:50.122 "aliases": [ 00:07:50.122 "67c30b5a-42cf-11ef-96ac-773515fba644" 00:07:50.122 ], 00:07:50.122 "product_name": "Malloc disk", 00:07:50.122 "block_size": 512, 00:07:50.122 "num_blocks": 65536, 00:07:50.122 "uuid": "67c30b5a-42cf-11ef-96ac-773515fba644", 00:07:50.122 "assigned_rate_limits": { 00:07:50.122 "rw_ios_per_sec": 0, 00:07:50.122 "rw_mbytes_per_sec": 0, 00:07:50.122 "r_mbytes_per_sec": 0, 00:07:50.122 "w_mbytes_per_sec": 0 00:07:50.122 }, 00:07:50.122 "claimed": true, 00:07:50.122 "claim_type": "exclusive_write", 00:07:50.122 "zoned": false, 00:07:50.122 "supported_io_types": { 00:07:50.122 "read": true, 00:07:50.122 "write": true, 00:07:50.122 "unmap": true, 00:07:50.122 "flush": true, 00:07:50.122 "reset": true, 00:07:50.122 "nvme_admin": false, 00:07:50.122 "nvme_io": false, 00:07:50.122 "nvme_io_md": false, 00:07:50.122 "write_zeroes": true, 00:07:50.122 "zcopy": true, 00:07:50.122 "get_zone_info": false, 00:07:50.122 "zone_management": false, 00:07:50.122 "zone_append": false, 00:07:50.122 "compare": false, 00:07:50.122 "compare_and_write": false, 00:07:50.122 "abort": true, 00:07:50.122 "seek_hole": false, 00:07:50.122 "seek_data": false, 00:07:50.122 "copy": true, 00:07:50.122 "nvme_iov_md": false 00:07:50.122 }, 00:07:50.122 "memory_domains": [ 00:07:50.122 { 00:07:50.122 "dma_device_id": "system", 00:07:50.122 "dma_device_type": 1 00:07:50.122 }, 00:07:50.122 { 00:07:50.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.122 "dma_device_type": 2 00:07:50.122 } 00:07:50.122 ], 00:07:50.122 "driver_specific": {} 00:07:50.122 }' 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:50.122 17:26:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:50.380 [2024-07-15 17:26:46.163546] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:50.380 [2024-07-15 17:26:46.163570] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.380 [2024-07-15 17:26:46.163591] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:50.380 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.380 17:26:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:50.638 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:50.638 "name": "Existed_Raid", 00:07:50.638 "uuid": "673c2770-42cf-11ef-96ac-773515fba644", 00:07:50.638 "strip_size_kb": 64, 00:07:50.638 "state": "offline", 00:07:50.638 "raid_level": "concat", 00:07:50.638 "superblock": true, 00:07:50.638 "num_base_bdevs": 2, 00:07:50.638 "num_base_bdevs_discovered": 1, 00:07:50.638 "num_base_bdevs_operational": 1, 00:07:50.638 "base_bdevs_list": [ 00:07:50.638 { 00:07:50.638 "name": null, 00:07:50.638 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.638 "is_configured": false, 00:07:50.638 "data_offset": 2048, 00:07:50.638 "data_size": 63488 00:07:50.638 }, 00:07:50.638 { 00:07:50.638 "name": "BaseBdev2", 00:07:50.638 "uuid": "67c30b5a-42cf-11ef-96ac-773515fba644", 00:07:50.638 "is_configured": true, 00:07:50.638 "data_offset": 2048, 00:07:50.638 "data_size": 63488 00:07:50.638 } 00:07:50.638 ] 00:07:50.638 }' 00:07:50.638 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:50.638 17:26:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.204 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:51.204 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:51.204 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:51.204 17:26:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.204 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:51.204 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:51.204 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:51.462 [2024-07-15 17:26:47.253596] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:51.462 [2024-07-15 17:26:47.253645] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x124602234a00 name Existed_Raid, state offline 00:07:51.462 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:51.462 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:51.462 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:51.462 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 49971 00:07:51.721 17:26:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 49971 ']' 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 49971 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 49971 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:51.721 killing process with pid 49971 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49971' 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 49971 00:07:51.721 [2024-07-15 17:26:47.521382] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:51.721 [2024-07-15 17:26:47.521416] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:51.721 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 49971 00:07:51.980 17:26:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:51.980 00:07:51.980 real 0m9.134s 00:07:51.980 user 0m16.010s 00:07:51.980 sys 0m1.498s 00:07:51.980 ************************************ 00:07:51.980 END TEST raid_state_function_test_sb 00:07:51.980 ************************************ 00:07:51.980 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.980 17:26:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:51.980 17:26:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:51.980 17:26:47 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:07:51.980 17:26:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:51.980 17:26:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.980 17:26:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:51.980 ************************************ 00:07:51.980 START TEST raid_superblock_test 00:07:51.980 ************************************ 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:51.980 17:26:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=50245 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 50245 /var/tmp/spdk-raid.sock 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 50245 ']' 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.980 17:26:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.980 [2024-07-15 17:26:47.756996] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:07:51.980 [2024-07-15 17:26:47.757179] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:52.568 EAL: TSC is not safe to use in SMP mode 00:07:52.568 EAL: TSC is not invariant 00:07:52.568 [2024-07-15 17:26:48.292048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.568 [2024-07-15 17:26:48.380197] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
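(Editor's note: the superblock test traced below layers passthru bdevs on top of malloc disks before assembling the array, again entirely via rpc.py. A rough hand-run equivalent is sketched here; the commands, names, UUIDs and flags are taken from later lines of this trace, and the sketch is illustrative only.)

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"   # same rpc.py and socket as the trace
  $RPC bdev_malloc_create 32 512 -b malloc1
  $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $RPC bdev_malloc_create 32 512 -b malloc2
  $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # -s writes the raid superblock onto the base bdevs at create time
  $RPC bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'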
00:07:52.568 [2024-07-15 17:26:48.382381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.568 [2024-07-15 17:26:48.383175] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:52.568 [2024-07-15 17:26:48.383190] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.134 17:26:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:53.392 malloc1 00:07:53.392 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:53.651 [2024-07-15 17:26:49.287210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:53.651 [2024-07-15 17:26:49.287285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:53.651 [2024-07-15 17:26:49.287314] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0e17a34780 00:07:53.651 [2024-07-15 17:26:49.287322] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:53.651 [2024-07-15 17:26:49.288228] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:53.651 [2024-07-15 17:26:49.288253] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:53.651 pt1 00:07:53.651 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:53.651 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:53.651 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:53.651 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:53.651 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:53.651 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:53.651 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:53.651 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:53.651 17:26:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:53.910 malloc2 00:07:53.910 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:54.168 [2024-07-15 17:26:49.795233] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:54.168 [2024-07-15 17:26:49.795301] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:54.168 [2024-07-15 17:26:49.795328] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0e17a34c80 00:07:54.168 [2024-07-15 17:26:49.795336] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:54.168 [2024-07-15 17:26:49.796011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:54.168 [2024-07-15 17:26:49.796036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:54.168 pt2 00:07:54.168 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:54.168 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:54.168 17:26:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:07:54.426 [2024-07-15 17:26:50.027264] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:54.426 [2024-07-15 17:26:50.027863] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:54.426 [2024-07-15 17:26:50.027910] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c0e17a34f00 00:07:54.426 [2024-07-15 17:26:50.027917] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:54.426 [2024-07-15 17:26:50.027946] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c0e17a97e20 00:07:54.426 [2024-07-15 17:26:50.028023] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c0e17a34f00 00:07:54.426 [2024-07-15 17:26:50.028028] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c0e17a34f00 00:07:54.426 [2024-07-15 17:26:50.028052] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.426 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:54.685 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:54.685 "name": "raid_bdev1", 00:07:54.685 "uuid": "6b7511b6-42cf-11ef-96ac-773515fba644", 00:07:54.685 "strip_size_kb": 64, 00:07:54.685 "state": "online", 00:07:54.685 "raid_level": "concat", 00:07:54.685 "superblock": true, 00:07:54.685 "num_base_bdevs": 2, 00:07:54.685 "num_base_bdevs_discovered": 2, 00:07:54.685 "num_base_bdevs_operational": 2, 00:07:54.685 "base_bdevs_list": [ 00:07:54.685 { 00:07:54.685 "name": "pt1", 00:07:54.685 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:54.685 "is_configured": true, 00:07:54.685 "data_offset": 2048, 00:07:54.685 "data_size": 63488 00:07:54.685 }, 00:07:54.685 { 00:07:54.685 "name": "pt2", 00:07:54.685 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:54.685 "is_configured": true, 00:07:54.685 "data_offset": 2048, 00:07:54.685 "data_size": 63488 00:07:54.685 } 00:07:54.685 ] 00:07:54.685 }' 00:07:54.686 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:54.686 17:26:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.958 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:54.958 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:54.958 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:54.958 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:54.958 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:54.958 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:54.958 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:54.958 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:55.216 [2024-07-15 17:26:50.883351] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:55.216 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:55.216 "name": "raid_bdev1", 00:07:55.216 "aliases": [ 00:07:55.216 "6b7511b6-42cf-11ef-96ac-773515fba644" 00:07:55.216 ], 00:07:55.216 "product_name": "Raid Volume", 00:07:55.216 "block_size": 512, 00:07:55.216 "num_blocks": 126976, 00:07:55.216 "uuid": "6b7511b6-42cf-11ef-96ac-773515fba644", 00:07:55.216 "assigned_rate_limits": { 00:07:55.216 "rw_ios_per_sec": 0, 00:07:55.216 "rw_mbytes_per_sec": 0, 00:07:55.216 "r_mbytes_per_sec": 0, 00:07:55.216 "w_mbytes_per_sec": 0 00:07:55.216 }, 00:07:55.216 "claimed": false, 00:07:55.216 "zoned": false, 00:07:55.216 "supported_io_types": { 00:07:55.216 "read": true, 00:07:55.216 "write": true, 00:07:55.216 "unmap": true, 00:07:55.216 "flush": true, 00:07:55.216 "reset": true, 00:07:55.216 "nvme_admin": false, 00:07:55.216 "nvme_io": 
false, 00:07:55.216 "nvme_io_md": false, 00:07:55.216 "write_zeroes": true, 00:07:55.216 "zcopy": false, 00:07:55.216 "get_zone_info": false, 00:07:55.216 "zone_management": false, 00:07:55.216 "zone_append": false, 00:07:55.216 "compare": false, 00:07:55.216 "compare_and_write": false, 00:07:55.216 "abort": false, 00:07:55.216 "seek_hole": false, 00:07:55.216 "seek_data": false, 00:07:55.216 "copy": false, 00:07:55.216 "nvme_iov_md": false 00:07:55.216 }, 00:07:55.216 "memory_domains": [ 00:07:55.216 { 00:07:55.216 "dma_device_id": "system", 00:07:55.216 "dma_device_type": 1 00:07:55.216 }, 00:07:55.216 { 00:07:55.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.216 "dma_device_type": 2 00:07:55.216 }, 00:07:55.216 { 00:07:55.216 "dma_device_id": "system", 00:07:55.216 "dma_device_type": 1 00:07:55.216 }, 00:07:55.216 { 00:07:55.216 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.217 "dma_device_type": 2 00:07:55.217 } 00:07:55.217 ], 00:07:55.217 "driver_specific": { 00:07:55.217 "raid": { 00:07:55.217 "uuid": "6b7511b6-42cf-11ef-96ac-773515fba644", 00:07:55.217 "strip_size_kb": 64, 00:07:55.217 "state": "online", 00:07:55.217 "raid_level": "concat", 00:07:55.217 "superblock": true, 00:07:55.217 "num_base_bdevs": 2, 00:07:55.217 "num_base_bdevs_discovered": 2, 00:07:55.217 "num_base_bdevs_operational": 2, 00:07:55.217 "base_bdevs_list": [ 00:07:55.217 { 00:07:55.217 "name": "pt1", 00:07:55.217 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.217 "is_configured": true, 00:07:55.217 "data_offset": 2048, 00:07:55.217 "data_size": 63488 00:07:55.217 }, 00:07:55.217 { 00:07:55.217 "name": "pt2", 00:07:55.217 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.217 "is_configured": true, 00:07:55.217 "data_offset": 2048, 00:07:55.217 "data_size": 63488 00:07:55.217 } 00:07:55.217 ] 00:07:55.217 } 00:07:55.217 } 00:07:55.217 }' 00:07:55.217 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:55.217 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:55.217 pt2' 00:07:55.217 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:55.217 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:55.217 17:26:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:55.476 "name": "pt1", 00:07:55.476 "aliases": [ 00:07:55.476 "00000000-0000-0000-0000-000000000001" 00:07:55.476 ], 00:07:55.476 "product_name": "passthru", 00:07:55.476 "block_size": 512, 00:07:55.476 "num_blocks": 65536, 00:07:55.476 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:55.476 "assigned_rate_limits": { 00:07:55.476 "rw_ios_per_sec": 0, 00:07:55.476 "rw_mbytes_per_sec": 0, 00:07:55.476 "r_mbytes_per_sec": 0, 00:07:55.476 "w_mbytes_per_sec": 0 00:07:55.476 }, 00:07:55.476 "claimed": true, 00:07:55.476 "claim_type": "exclusive_write", 00:07:55.476 "zoned": false, 00:07:55.476 "supported_io_types": { 00:07:55.476 "read": true, 00:07:55.476 "write": true, 00:07:55.476 "unmap": true, 00:07:55.476 "flush": true, 00:07:55.476 "reset": true, 00:07:55.476 "nvme_admin": false, 00:07:55.476 "nvme_io": false, 00:07:55.476 "nvme_io_md": false, 00:07:55.476 "write_zeroes": true, 
00:07:55.476 "zcopy": true, 00:07:55.476 "get_zone_info": false, 00:07:55.476 "zone_management": false, 00:07:55.476 "zone_append": false, 00:07:55.476 "compare": false, 00:07:55.476 "compare_and_write": false, 00:07:55.476 "abort": true, 00:07:55.476 "seek_hole": false, 00:07:55.476 "seek_data": false, 00:07:55.476 "copy": true, 00:07:55.476 "nvme_iov_md": false 00:07:55.476 }, 00:07:55.476 "memory_domains": [ 00:07:55.476 { 00:07:55.476 "dma_device_id": "system", 00:07:55.476 "dma_device_type": 1 00:07:55.476 }, 00:07:55.476 { 00:07:55.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.476 "dma_device_type": 2 00:07:55.476 } 00:07:55.476 ], 00:07:55.476 "driver_specific": { 00:07:55.476 "passthru": { 00:07:55.476 "name": "pt1", 00:07:55.476 "base_bdev_name": "malloc1" 00:07:55.476 } 00:07:55.476 } 00:07:55.476 }' 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:55.476 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:55.736 "name": "pt2", 00:07:55.736 "aliases": [ 00:07:55.736 "00000000-0000-0000-0000-000000000002" 00:07:55.736 ], 00:07:55.736 "product_name": "passthru", 00:07:55.736 "block_size": 512, 00:07:55.736 "num_blocks": 65536, 00:07:55.736 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:55.736 "assigned_rate_limits": { 00:07:55.736 "rw_ios_per_sec": 0, 00:07:55.736 "rw_mbytes_per_sec": 0, 00:07:55.736 "r_mbytes_per_sec": 0, 00:07:55.736 "w_mbytes_per_sec": 0 00:07:55.736 }, 00:07:55.736 "claimed": true, 00:07:55.736 "claim_type": "exclusive_write", 00:07:55.736 "zoned": false, 00:07:55.736 "supported_io_types": { 00:07:55.736 "read": true, 00:07:55.736 "write": true, 00:07:55.736 "unmap": true, 00:07:55.736 "flush": true, 00:07:55.736 "reset": true, 00:07:55.736 "nvme_admin": false, 00:07:55.736 "nvme_io": false, 00:07:55.736 "nvme_io_md": false, 00:07:55.736 "write_zeroes": true, 00:07:55.736 "zcopy": true, 00:07:55.736 "get_zone_info": false, 00:07:55.736 "zone_management": false, 00:07:55.736 "zone_append": false, 00:07:55.736 
"compare": false, 00:07:55.736 "compare_and_write": false, 00:07:55.736 "abort": true, 00:07:55.736 "seek_hole": false, 00:07:55.736 "seek_data": false, 00:07:55.736 "copy": true, 00:07:55.736 "nvme_iov_md": false 00:07:55.736 }, 00:07:55.736 "memory_domains": [ 00:07:55.736 { 00:07:55.736 "dma_device_id": "system", 00:07:55.736 "dma_device_type": 1 00:07:55.736 }, 00:07:55.736 { 00:07:55.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.736 "dma_device_type": 2 00:07:55.736 } 00:07:55.736 ], 00:07:55.736 "driver_specific": { 00:07:55.736 "passthru": { 00:07:55.736 "name": "pt2", 00:07:55.736 "base_bdev_name": "malloc2" 00:07:55.736 } 00:07:55.736 } 00:07:55.736 }' 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:55.736 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:55.994 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:55.994 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:55.994 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:55.994 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:55.994 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:55.994 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:56.273 [2024-07-15 17:26:51.843371] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.273 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6b7511b6-42cf-11ef-96ac-773515fba644 00:07:56.273 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6b7511b6-42cf-11ef-96ac-773515fba644 ']' 00:07:56.273 17:26:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:56.273 [2024-07-15 17:26:52.067335] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:56.273 [2024-07-15 17:26:52.067360] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:56.273 [2024-07-15 17:26:52.067397] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:56.273 [2024-07-15 17:26:52.067409] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:56.273 [2024-07-15 17:26:52.067414] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0e17a34f00 name raid_bdev1, state offline 00:07:56.273 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:56.273 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:07:56.531 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:07:56.531 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:07:56.531 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:56.531 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:56.790 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:56.790 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:57.049 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:57.049 17:26:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:57.308 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:07:57.566 [2024-07-15 17:26:53.311391] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:57.566 [2024-07-15 17:26:53.312036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:57.566 [2024-07-15 17:26:53.312069] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:07:57.566 [2024-07-15 17:26:53.312103] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:57.566 [2024-07-15 17:26:53.312114] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:57.566 [2024-07-15 17:26:53.312118] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0e17a34c80 name raid_bdev1, state configuring 00:07:57.566 request: 00:07:57.566 { 00:07:57.566 "name": "raid_bdev1", 00:07:57.566 "raid_level": "concat", 00:07:57.566 "base_bdevs": [ 00:07:57.566 "malloc1", 00:07:57.566 "malloc2" 00:07:57.566 ], 00:07:57.566 "strip_size_kb": 64, 00:07:57.566 "superblock": false, 00:07:57.566 "method": "bdev_raid_create", 00:07:57.566 "req_id": 1 00:07:57.566 } 00:07:57.566 Got JSON-RPC error response 00:07:57.566 response: 00:07:57.566 { 00:07:57.566 "code": -17, 00:07:57.566 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:57.566 } 00:07:57.566 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:07:57.566 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:57.566 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:57.566 17:26:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:57.566 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:07:57.566 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.824 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:07:57.824 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:07:57.824 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:58.082 [2024-07-15 17:26:53.771381] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:58.082 [2024-07-15 17:26:53.771447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.082 [2024-07-15 17:26:53.771474] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0e17a34780 00:07:58.082 [2024-07-15 17:26:53.771482] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.082 [2024-07-15 17:26:53.772128] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.082 [2024-07-15 17:26:53.772156] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:58.082 [2024-07-15 17:26:53.772180] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:58.082 [2024-07-15 17:26:53.772193] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:58.082 pt1 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.082 17:26:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.341 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:58.341 "name": "raid_bdev1", 00:07:58.341 "uuid": "6b7511b6-42cf-11ef-96ac-773515fba644", 00:07:58.341 "strip_size_kb": 64, 00:07:58.341 "state": "configuring", 00:07:58.341 "raid_level": "concat", 00:07:58.341 "superblock": true, 00:07:58.341 "num_base_bdevs": 2, 00:07:58.341 "num_base_bdevs_discovered": 1, 00:07:58.341 "num_base_bdevs_operational": 2, 00:07:58.341 "base_bdevs_list": [ 00:07:58.341 { 00:07:58.341 "name": "pt1", 00:07:58.341 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:58.341 "is_configured": true, 00:07:58.341 "data_offset": 2048, 00:07:58.341 "data_size": 63488 00:07:58.341 }, 00:07:58.341 { 00:07:58.341 "name": null, 00:07:58.341 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:58.341 "is_configured": false, 00:07:58.341 "data_offset": 2048, 00:07:58.341 "data_size": 63488 00:07:58.341 } 00:07:58.341 ] 00:07:58.341 }' 00:07:58.341 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:58.341 17:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.599 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:58.599 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:58.599 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:58.599 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:58.858 [2024-07-15 17:26:54.611423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:58.858 [2024-07-15 17:26:54.611476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:58.858 [2024-07-15 17:26:54.611488] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c0e17a34f00 00:07:58.858 [2024-07-15 17:26:54.611495] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:58.858 [2024-07-15 17:26:54.611606] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:58.858 [2024-07-15 17:26:54.611617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:58.858 [2024-07-15 17:26:54.611639] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
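For reference, the superblock reassembly being traced in the entries above and below reduces to the following rpc.py sequence (a sketch assembled only from invocations visible in this trace, with rpc.py standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock; the strip size and the fixed passthru UUIDs are exactly as logged):

rpc.py bdev_malloc_create 32 512 -b malloc1
rpc.py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
rpc.py bdev_malloc_create 32 512 -b malloc2
rpc.py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
rpc.py bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s   # -s writes the on-disk superblock
rpc.py bdev_raid_delete raid_bdev1
rpc.py bdev_passthru_delete pt1
rpc.py bdev_passthru_delete pt2
rpc.py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001   # superblock found, pt1 claimed
rpc.py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002   # raid_bdev1 reassembles with no new bdev_raid_create

The intervening bdev_raid_create against 'malloc1 malloc2' is expected to fail with -17 (File exists), since both base bdevs already carry the superblock of a different raid bdev.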
00:07:58.858 [2024-07-15 17:26:54.611648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:58.858 [2024-07-15 17:26:54.611672] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c0e17a35180 00:07:58.858 [2024-07-15 17:26:54.611676] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:58.858 [2024-07-15 17:26:54.611696] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c0e17a97e20 00:07:58.858 [2024-07-15 17:26:54.611751] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c0e17a35180 00:07:58.858 [2024-07-15 17:26:54.611756] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c0e17a35180 00:07:58.858 [2024-07-15 17:26:54.611786] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:58.858 pt2 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:58.858 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:59.116 17:26:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:59.116 "name": "raid_bdev1", 00:07:59.116 "uuid": "6b7511b6-42cf-11ef-96ac-773515fba644", 00:07:59.116 "strip_size_kb": 64, 00:07:59.116 "state": "online", 00:07:59.116 "raid_level": "concat", 00:07:59.116 "superblock": true, 00:07:59.116 "num_base_bdevs": 2, 00:07:59.117 "num_base_bdevs_discovered": 2, 00:07:59.117 "num_base_bdevs_operational": 2, 00:07:59.117 "base_bdevs_list": [ 00:07:59.117 { 00:07:59.117 "name": "pt1", 00:07:59.117 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.117 "is_configured": true, 00:07:59.117 "data_offset": 2048, 00:07:59.117 "data_size": 63488 00:07:59.117 }, 00:07:59.117 { 00:07:59.117 "name": "pt2", 00:07:59.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.117 "is_configured": true, 00:07:59.117 "data_offset": 2048, 00:07:59.117 "data_size": 63488 00:07:59.117 } 00:07:59.117 ] 00:07:59.117 }' 00:07:59.117 17:26:54 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:59.117 17:26:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.374 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:59.374 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:59.374 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:59.374 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:59.374 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:59.374 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:59.374 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:59.374 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:59.632 [2024-07-15 17:26:55.383515] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:59.632 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:59.632 "name": "raid_bdev1", 00:07:59.632 "aliases": [ 00:07:59.632 "6b7511b6-42cf-11ef-96ac-773515fba644" 00:07:59.632 ], 00:07:59.632 "product_name": "Raid Volume", 00:07:59.632 "block_size": 512, 00:07:59.632 "num_blocks": 126976, 00:07:59.632 "uuid": "6b7511b6-42cf-11ef-96ac-773515fba644", 00:07:59.632 "assigned_rate_limits": { 00:07:59.632 "rw_ios_per_sec": 0, 00:07:59.632 "rw_mbytes_per_sec": 0, 00:07:59.632 "r_mbytes_per_sec": 0, 00:07:59.632 "w_mbytes_per_sec": 0 00:07:59.632 }, 00:07:59.632 "claimed": false, 00:07:59.632 "zoned": false, 00:07:59.632 "supported_io_types": { 00:07:59.632 "read": true, 00:07:59.632 "write": true, 00:07:59.632 "unmap": true, 00:07:59.633 "flush": true, 00:07:59.633 "reset": true, 00:07:59.633 "nvme_admin": false, 00:07:59.633 "nvme_io": false, 00:07:59.633 "nvme_io_md": false, 00:07:59.633 "write_zeroes": true, 00:07:59.633 "zcopy": false, 00:07:59.633 "get_zone_info": false, 00:07:59.633 "zone_management": false, 00:07:59.633 "zone_append": false, 00:07:59.633 "compare": false, 00:07:59.633 "compare_and_write": false, 00:07:59.633 "abort": false, 00:07:59.633 "seek_hole": false, 00:07:59.633 "seek_data": false, 00:07:59.633 "copy": false, 00:07:59.633 "nvme_iov_md": false 00:07:59.633 }, 00:07:59.633 "memory_domains": [ 00:07:59.633 { 00:07:59.633 "dma_device_id": "system", 00:07:59.633 "dma_device_type": 1 00:07:59.633 }, 00:07:59.633 { 00:07:59.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.633 "dma_device_type": 2 00:07:59.633 }, 00:07:59.633 { 00:07:59.633 "dma_device_id": "system", 00:07:59.633 "dma_device_type": 1 00:07:59.633 }, 00:07:59.633 { 00:07:59.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.633 "dma_device_type": 2 00:07:59.633 } 00:07:59.633 ], 00:07:59.633 "driver_specific": { 00:07:59.633 "raid": { 00:07:59.633 "uuid": "6b7511b6-42cf-11ef-96ac-773515fba644", 00:07:59.633 "strip_size_kb": 64, 00:07:59.633 "state": "online", 00:07:59.633 "raid_level": "concat", 00:07:59.633 "superblock": true, 00:07:59.633 "num_base_bdevs": 2, 00:07:59.633 "num_base_bdevs_discovered": 2, 00:07:59.633 "num_base_bdevs_operational": 2, 00:07:59.633 "base_bdevs_list": [ 00:07:59.633 { 00:07:59.633 "name": "pt1", 00:07:59.633 "uuid": "00000000-0000-0000-0000-000000000001", 
00:07:59.633 "is_configured": true, 00:07:59.633 "data_offset": 2048, 00:07:59.633 "data_size": 63488 00:07:59.633 }, 00:07:59.633 { 00:07:59.633 "name": "pt2", 00:07:59.633 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:59.633 "is_configured": true, 00:07:59.633 "data_offset": 2048, 00:07:59.633 "data_size": 63488 00:07:59.633 } 00:07:59.633 ] 00:07:59.633 } 00:07:59.633 } 00:07:59.633 }' 00:07:59.633 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:59.633 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:59.633 pt2' 00:07:59.633 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:59.633 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:59.633 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:59.891 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:59.891 "name": "pt1", 00:07:59.891 "aliases": [ 00:07:59.891 "00000000-0000-0000-0000-000000000001" 00:07:59.891 ], 00:07:59.891 "product_name": "passthru", 00:07:59.891 "block_size": 512, 00:07:59.891 "num_blocks": 65536, 00:07:59.891 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:59.891 "assigned_rate_limits": { 00:07:59.891 "rw_ios_per_sec": 0, 00:07:59.891 "rw_mbytes_per_sec": 0, 00:07:59.891 "r_mbytes_per_sec": 0, 00:07:59.891 "w_mbytes_per_sec": 0 00:07:59.891 }, 00:07:59.891 "claimed": true, 00:07:59.891 "claim_type": "exclusive_write", 00:07:59.891 "zoned": false, 00:07:59.891 "supported_io_types": { 00:07:59.891 "read": true, 00:07:59.891 "write": true, 00:07:59.891 "unmap": true, 00:07:59.891 "flush": true, 00:07:59.891 "reset": true, 00:07:59.891 "nvme_admin": false, 00:07:59.891 "nvme_io": false, 00:07:59.891 "nvme_io_md": false, 00:07:59.891 "write_zeroes": true, 00:07:59.891 "zcopy": true, 00:07:59.891 "get_zone_info": false, 00:07:59.891 "zone_management": false, 00:07:59.891 "zone_append": false, 00:07:59.891 "compare": false, 00:07:59.891 "compare_and_write": false, 00:07:59.891 "abort": true, 00:07:59.891 "seek_hole": false, 00:07:59.891 "seek_data": false, 00:07:59.891 "copy": true, 00:07:59.891 "nvme_iov_md": false 00:07:59.891 }, 00:07:59.891 "memory_domains": [ 00:07:59.891 { 00:07:59.891 "dma_device_id": "system", 00:07:59.891 "dma_device_type": 1 00:07:59.891 }, 00:07:59.891 { 00:07:59.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:59.891 "dma_device_type": 2 00:07:59.891 } 00:07:59.891 ], 00:07:59.891 "driver_specific": { 00:07:59.891 "passthru": { 00:07:59.891 "name": "pt1", 00:07:59.891 "base_bdev_name": "malloc1" 00:07:59.891 } 00:07:59.891 } 00:07:59.891 }' 00:07:59.891 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:00.149 17:26:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:00.406 "name": "pt2", 00:08:00.406 "aliases": [ 00:08:00.406 "00000000-0000-0000-0000-000000000002" 00:08:00.406 ], 00:08:00.406 "product_name": "passthru", 00:08:00.406 "block_size": 512, 00:08:00.406 "num_blocks": 65536, 00:08:00.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:00.406 "assigned_rate_limits": { 00:08:00.406 "rw_ios_per_sec": 0, 00:08:00.406 "rw_mbytes_per_sec": 0, 00:08:00.406 "r_mbytes_per_sec": 0, 00:08:00.406 "w_mbytes_per_sec": 0 00:08:00.406 }, 00:08:00.406 "claimed": true, 00:08:00.406 "claim_type": "exclusive_write", 00:08:00.406 "zoned": false, 00:08:00.406 "supported_io_types": { 00:08:00.406 "read": true, 00:08:00.406 "write": true, 00:08:00.406 "unmap": true, 00:08:00.406 "flush": true, 00:08:00.406 "reset": true, 00:08:00.406 "nvme_admin": false, 00:08:00.406 "nvme_io": false, 00:08:00.406 "nvme_io_md": false, 00:08:00.406 "write_zeroes": true, 00:08:00.406 "zcopy": true, 00:08:00.406 "get_zone_info": false, 00:08:00.406 "zone_management": false, 00:08:00.406 "zone_append": false, 00:08:00.406 "compare": false, 00:08:00.406 "compare_and_write": false, 00:08:00.406 "abort": true, 00:08:00.406 "seek_hole": false, 00:08:00.406 "seek_data": false, 00:08:00.406 "copy": true, 00:08:00.406 "nvme_iov_md": false 00:08:00.406 }, 00:08:00.406 "memory_domains": [ 00:08:00.406 { 00:08:00.406 "dma_device_id": "system", 00:08:00.406 "dma_device_type": 1 00:08:00.406 }, 00:08:00.406 { 00:08:00.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.406 "dma_device_type": 2 00:08:00.406 } 00:08:00.406 ], 00:08:00.406 "driver_specific": { 00:08:00.406 "passthru": { 00:08:00.406 "name": "pt2", 00:08:00.406 "base_bdev_name": "malloc2" 00:08:00.406 } 00:08:00.406 } 00:08:00.406 }' 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:00.406 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:00.664 [2024-07-15 17:26:56.343643] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6b7511b6-42cf-11ef-96ac-773515fba644 '!=' 6b7511b6-42cf-11ef-96ac-773515fba644 ']' 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 50245 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 50245 ']' 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 50245 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 50245 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:00.664 killing process with pid 50245 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50245' 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 50245 00:08:00.664 [2024-07-15 17:26:56.372359] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:00.664 [2024-07-15 17:26:56.372384] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:00.664 [2024-07-15 17:26:56.372399] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:00.664 [2024-07-15 17:26:56.372403] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c0e17a35180 name raid_bdev1, state offline 00:08:00.664 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 50245 00:08:00.664 [2024-07-15 17:26:56.384749] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.923 17:26:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:00.923 00:08:00.923 real 0m8.843s 00:08:00.923 user 0m15.399s 00:08:00.923 sys 0m1.512s 00:08:00.923 17:26:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:00.923 17:26:56 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.924 ************************************ 00:08:00.924 END TEST raid_superblock_test 00:08:00.924 ************************************ 00:08:00.924 17:26:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:00.924 17:26:56 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:00.924 17:26:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:00.924 17:26:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.924 17:26:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.924 ************************************ 00:08:00.924 START TEST raid_read_error_test 00:08:00.924 ************************************ 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.MOgnTL77XP 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50510 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50510 
/var/tmp/spdk-raid.sock 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 50510 ']' 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:00.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:00.924 17:26:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.924 [2024-07-15 17:26:56.650460] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:08:00.924 [2024-07-15 17:26:56.650648] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:01.489 EAL: TSC is not safe to use in SMP mode 00:08:01.489 EAL: TSC is not invariant 00:08:01.489 [2024-07-15 17:26:57.182552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.489 [2024-07-15 17:26:57.267135] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:01.489 [2024-07-15 17:26:57.269236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.489 [2024-07-15 17:26:57.270033] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.489 [2024-07-15 17:26:57.270047] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:02.055 17:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.055 17:26:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:02.055 17:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:02.055 17:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:02.314 BaseBdev1_malloc 00:08:02.314 17:26:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:02.572 true 00:08:02.572 17:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:02.572 [2024-07-15 17:26:58.377528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:02.572 [2024-07-15 17:26:58.377607] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.572 [2024-07-15 17:26:58.377663] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f86a9034780 00:08:02.572 [2024-07-15 17:26:58.377671] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
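The bdev stack the read-error test assembles here, summarized from the rpc.py calls in the entries above and immediately below (rpc.py again standing for /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock):

rpc.py bdev_malloc_create 32 512 -b BaseBdev1_malloc
rpc.py bdev_error_create BaseBdev1_malloc                  # exposes EE_BaseBdev1_malloc
rpc.py bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
rpc.py bdev_malloc_create 32 512 -b BaseBdev2_malloc
rpc.py bdev_error_create BaseBdev2_malloc                  # exposes EE_BaseBdev2_malloc
rpc.py bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2
rpc.py bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s

Each malloc bdev is wrapped in an error bdev (the EE_ name) so that failures can later be injected with bdev_error_inject_error, and the passthru bdev on top is what the concat raid actually claims.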
00:08:02.572 [2024-07-15 17:26:58.378389] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.572 [2024-07-15 17:26:58.378415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:02.572 BaseBdev1 00:08:02.572 17:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:02.572 17:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:02.830 BaseBdev2_malloc 00:08:03.109 17:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:03.109 true 00:08:03.109 17:26:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:03.366 [2024-07-15 17:26:59.145676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:03.366 [2024-07-15 17:26:59.145732] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:03.366 [2024-07-15 17:26:59.145774] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f86a9034c80 00:08:03.366 [2024-07-15 17:26:59.145782] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:03.366 [2024-07-15 17:26:59.146493] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:03.366 [2024-07-15 17:26:59.146519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:03.366 BaseBdev2 00:08:03.366 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:03.624 [2024-07-15 17:26:59.421704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.624 [2024-07-15 17:26:59.422343] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:03.624 [2024-07-15 17:26:59.422421] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f86a9034f00 00:08:03.624 [2024-07-15 17:26:59.422427] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:03.624 [2024-07-15 17:26:59.422459] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f86a90a0e20 00:08:03.624 [2024-07-15 17:26:59.422548] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f86a9034f00 00:08:03.624 [2024-07-15 17:26:59.422552] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f86a9034f00 00:08:03.624 [2024-07-15 17:26:59.422594] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:03.624 17:26:59 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:03.624 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.882 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:03.882 "name": "raid_bdev1", 00:08:03.882 "uuid": "710e8c02-42cf-11ef-96ac-773515fba644", 00:08:03.882 "strip_size_kb": 64, 00:08:03.882 "state": "online", 00:08:03.882 "raid_level": "concat", 00:08:03.882 "superblock": true, 00:08:03.882 "num_base_bdevs": 2, 00:08:03.882 "num_base_bdevs_discovered": 2, 00:08:03.882 "num_base_bdevs_operational": 2, 00:08:03.882 "base_bdevs_list": [ 00:08:03.882 { 00:08:03.882 "name": "BaseBdev1", 00:08:03.882 "uuid": "b0930213-1ff5-2c5e-af34-67a716ca3a10", 00:08:03.882 "is_configured": true, 00:08:03.882 "data_offset": 2048, 00:08:03.882 "data_size": 63488 00:08:03.882 }, 00:08:03.882 { 00:08:03.882 "name": "BaseBdev2", 00:08:03.882 "uuid": "f90167d7-e078-5b58-aaf2-f57003f53bf1", 00:08:03.882 "is_configured": true, 00:08:03.882 "data_offset": 2048, 00:08:03.882 "data_size": 63488 00:08:03.882 } 00:08:03.882 ] 00:08:03.882 }' 00:08:03.882 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:03.882 17:26:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:04.448 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:04.448 17:26:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:04.448 [2024-07-15 17:27:00.101972] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f86a90a0ec0 00:08:05.383 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:05.641 17:27:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:05.641 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:05.899 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:05.899 "name": "raid_bdev1", 00:08:05.899 "uuid": "710e8c02-42cf-11ef-96ac-773515fba644", 00:08:05.899 "strip_size_kb": 64, 00:08:05.899 "state": "online", 00:08:05.899 "raid_level": "concat", 00:08:05.899 "superblock": true, 00:08:05.899 "num_base_bdevs": 2, 00:08:05.899 "num_base_bdevs_discovered": 2, 00:08:05.899 "num_base_bdevs_operational": 2, 00:08:05.899 "base_bdevs_list": [ 00:08:05.899 { 00:08:05.899 "name": "BaseBdev1", 00:08:05.899 "uuid": "b0930213-1ff5-2c5e-af34-67a716ca3a10", 00:08:05.899 "is_configured": true, 00:08:05.899 "data_offset": 2048, 00:08:05.899 "data_size": 63488 00:08:05.899 }, 00:08:05.899 { 00:08:05.899 "name": "BaseBdev2", 00:08:05.899 "uuid": "f90167d7-e078-5b58-aaf2-f57003f53bf1", 00:08:05.900 "is_configured": true, 00:08:05.900 "data_offset": 2048, 00:08:05.900 "data_size": 63488 00:08:05.900 } 00:08:05.900 ] 00:08:05.900 }' 00:08:05.900 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:05.900 17:27:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.158 17:27:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:06.418 [2024-07-15 17:27:02.227254] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:06.418 [2024-07-15 17:27:02.227282] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.418 [2024-07-15 17:27:02.227646] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.418 [2024-07-15 17:27:02.227656] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.418 [2024-07-15 17:27:02.227678] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:06.418 [2024-07-15 17:27:02.227682] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f86a9034f00 name raid_bdev1, state offline 00:08:06.418 0 00:08:06.418 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50510 00:08:06.418 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 50510 ']' 00:08:06.418 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 50510 00:08:06.418 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:08:06.418 17:27:02 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:06.418 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50510 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:06.678 killing process with pid 50510 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50510' 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 50510 00:08:06.678 [2024-07-15 17:27:02.253953] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 50510 00:08:06.678 [2024-07-15 17:27:02.265256] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.MOgnTL77XP 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:08:06.678 00:08:06.678 real 0m5.818s 00:08:06.678 user 0m8.963s 00:08:06.678 sys 0m0.940s 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:06.678 ************************************ 00:08:06.678 END TEST raid_read_error_test 00:08:06.678 ************************************ 00:08:06.678 17:27:02 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.678 17:27:02 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:06.678 17:27:02 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:06.678 17:27:02 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:06.678 17:27:02 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.678 17:27:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.678 ************************************ 00:08:06.678 START TEST raid_write_error_test 00:08:06.678 ************************************ 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:06.678 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.4akSQlXWV8 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50638 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50638 /var/tmp/spdk-raid.sock 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 50638 ']' 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:06.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:06.679 17:27:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.974 [2024-07-15 17:27:02.511684] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:08:06.974 [2024-07-15 17:27:02.511853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:07.234 EAL: TSC is not safe to use in SMP mode 00:08:07.234 EAL: TSC is not invariant 00:08:07.234 [2024-07-15 17:27:03.064747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.492 [2024-07-15 17:27:03.153362] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:07.492 [2024-07-15 17:27:03.155802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.492 [2024-07-15 17:27:03.156626] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.492 [2024-07-15 17:27:03.156640] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.750 17:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:07.750 17:27:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:07.750 17:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:07.750 17:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:08.008 BaseBdev1_malloc 00:08:08.008 17:27:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:08.266 true 00:08:08.525 17:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:08.525 [2024-07-15 17:27:04.316588] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:08.525 [2024-07-15 17:27:04.316648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:08.525 [2024-07-15 17:27:04.316677] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x241bce434780 00:08:08.525 [2024-07-15 17:27:04.316686] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:08.525 [2024-07-15 17:27:04.317437] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:08.525 [2024-07-15 17:27:04.317463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:08.525 BaseBdev1 00:08:08.525 17:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:08.525 17:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:08.783 BaseBdev2_malloc 00:08:08.783 17:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:09.042 true 00:08:09.042 17:27:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:09.301 [2024-07-15 17:27:05.008592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:09.301 [2024-07-15 17:27:05.008664] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.301 [2024-07-15 17:27:05.008709] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x241bce434c80 00:08:09.301 [2024-07-15 17:27:05.008718] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.301 [2024-07-15 17:27:05.009441] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.301 [2024-07-15 17:27:05.009466] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:09.301 BaseBdev2 00:08:09.301 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:09.560 [2024-07-15 17:27:05.276615] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:09.560 [2024-07-15 17:27:05.277216] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:09.560 [2024-07-15 17:27:05.277280] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x241bce434f00 00:08:09.560 [2024-07-15 17:27:05.277287] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:09.560 [2024-07-15 17:27:05.277320] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x241bce4a0e20 00:08:09.560 [2024-07-15 17:27:05.277396] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x241bce434f00 00:08:09.560 [2024-07-15 17:27:05.277400] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x241bce434f00 00:08:09.560 [2024-07-15 17:27:05.277428] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:09.560 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:09.819 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:09.819 "name": "raid_bdev1", 00:08:09.819 "uuid": "748bef72-42cf-11ef-96ac-773515fba644", 00:08:09.819 "strip_size_kb": 64, 00:08:09.819 "state": "online", 00:08:09.819 
"raid_level": "concat", 00:08:09.819 "superblock": true, 00:08:09.819 "num_base_bdevs": 2, 00:08:09.819 "num_base_bdevs_discovered": 2, 00:08:09.819 "num_base_bdevs_operational": 2, 00:08:09.819 "base_bdevs_list": [ 00:08:09.819 { 00:08:09.819 "name": "BaseBdev1", 00:08:09.819 "uuid": "dc299cd4-7a0a-6d5a-bb4d-0db66e18490c", 00:08:09.819 "is_configured": true, 00:08:09.819 "data_offset": 2048, 00:08:09.819 "data_size": 63488 00:08:09.819 }, 00:08:09.819 { 00:08:09.819 "name": "BaseBdev2", 00:08:09.819 "uuid": "bd5a462e-0339-8b51-9610-9ed6639b2550", 00:08:09.819 "is_configured": true, 00:08:09.819 "data_offset": 2048, 00:08:09.819 "data_size": 63488 00:08:09.819 } 00:08:09.819 ] 00:08:09.819 }' 00:08:09.819 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:09.819 17:27:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:10.385 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:10.385 17:27:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:10.385 [2024-07-15 17:27:06.048815] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x241bce4a0ec0 00:08:11.320 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:11.579 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.837 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:11.837 "name": "raid_bdev1", 00:08:11.837 "uuid": "748bef72-42cf-11ef-96ac-773515fba644", 00:08:11.837 "strip_size_kb": 64, 00:08:11.837 "state": "online", 00:08:11.837 
"raid_level": "concat", 00:08:11.837 "superblock": true, 00:08:11.837 "num_base_bdevs": 2, 00:08:11.837 "num_base_bdevs_discovered": 2, 00:08:11.837 "num_base_bdevs_operational": 2, 00:08:11.837 "base_bdevs_list": [ 00:08:11.837 { 00:08:11.837 "name": "BaseBdev1", 00:08:11.837 "uuid": "dc299cd4-7a0a-6d5a-bb4d-0db66e18490c", 00:08:11.837 "is_configured": true, 00:08:11.837 "data_offset": 2048, 00:08:11.837 "data_size": 63488 00:08:11.837 }, 00:08:11.837 { 00:08:11.837 "name": "BaseBdev2", 00:08:11.837 "uuid": "bd5a462e-0339-8b51-9610-9ed6639b2550", 00:08:11.837 "is_configured": true, 00:08:11.837 "data_offset": 2048, 00:08:11.837 "data_size": 63488 00:08:11.837 } 00:08:11.837 ] 00:08:11.837 }' 00:08:11.837 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:11.837 17:27:07 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.096 17:27:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:12.354 [2024-07-15 17:27:08.121994] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.354 [2024-07-15 17:27:08.122023] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.354 [2024-07-15 17:27:08.122388] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.354 [2024-07-15 17:27:08.122397] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:12.354 [2024-07-15 17:27:08.122404] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.354 [2024-07-15 17:27:08.122408] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x241bce434f00 name raid_bdev1, state offline 00:08:12.354 0 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50638 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 50638 ']' 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 50638 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50638 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:12.354 killing process with pid 50638 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50638' 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 50638 00:08:12.354 [2024-07-15 17:27:08.149079] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:12.354 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 50638 00:08:12.354 [2024-07-15 17:27:08.160198] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.4akSQlXWV8 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:08:12.614 00:08:12.614 real 0m5.847s 00:08:12.614 user 0m8.863s 00:08:12.614 sys 0m1.116s 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.614 17:27:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.614 ************************************ 00:08:12.614 END TEST raid_write_error_test 00:08:12.614 ************************************ 00:08:12.614 17:27:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:12.614 17:27:08 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:12.614 17:27:08 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:12.614 17:27:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:12.614 17:27:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:12.614 17:27:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:12.614 ************************************ 00:08:12.614 START TEST raid_state_function_test 00:08:12.614 ************************************ 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:12.614 17:27:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50764 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50764' 00:08:12.614 Process raid pid: 50764 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50764 /var/tmp/spdk-raid.sock 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 50764 ']' 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:12.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:12.614 17:27:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.614 [2024-07-15 17:27:08.404667] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:08:12.614 [2024-07-15 17:27:08.404890] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:13.181 EAL: TSC is not safe to use in SMP mode 00:08:13.181 EAL: TSC is not invariant 00:08:13.181 [2024-07-15 17:27:08.986998] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.439 [2024-07-15 17:27:09.078169] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:08:13.439 [2024-07-15 17:27:09.080244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.439 [2024-07-15 17:27:09.081023] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.439 [2024-07-15 17:27:09.081038] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:13.697 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:13.697 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:08:13.697 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:13.956 [2024-07-15 17:27:09.689334] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:13.956 [2024-07-15 17:27:09.689386] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:13.956 [2024-07-15 17:27:09.689392] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:13.956 [2024-07-15 17:27:09.689401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:13.956 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.215 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:14.215 "name": "Existed_Raid", 00:08:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.215 "strip_size_kb": 0, 00:08:14.215 "state": "configuring", 00:08:14.215 "raid_level": "raid1", 00:08:14.215 "superblock": false, 00:08:14.215 "num_base_bdevs": 2, 00:08:14.215 "num_base_bdevs_discovered": 0, 00:08:14.215 "num_base_bdevs_operational": 2, 00:08:14.215 "base_bdevs_list": [ 00:08:14.215 { 00:08:14.215 "name": "BaseBdev1", 00:08:14.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.215 "is_configured": false, 00:08:14.215 "data_offset": 0, 00:08:14.215 "data_size": 0 00:08:14.215 }, 00:08:14.215 { 00:08:14.215 "name": "BaseBdev2", 00:08:14.215 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:14.215 "is_configured": false, 00:08:14.215 "data_offset": 0, 00:08:14.215 "data_size": 0 00:08:14.215 } 00:08:14.215 ] 00:08:14.215 }' 00:08:14.215 17:27:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.215 17:27:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.515 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:14.788 [2024-07-15 17:27:10.485345] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:14.788 [2024-07-15 17:27:10.485374] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3f6252c34500 name Existed_Raid, state configuring 00:08:14.788 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:15.047 [2024-07-15 17:27:10.761343] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.047 [2024-07-15 17:27:10.761403] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.047 [2024-07-15 17:27:10.761409] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.047 [2024-07-15 17:27:10.761434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.047 17:27:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:15.306 [2024-07-15 17:27:10.998332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:15.306 BaseBdev1 00:08:15.306 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:15.306 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:15.306 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:15.306 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:15.306 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:15.306 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:15.306 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:15.564 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:15.822 [ 00:08:15.822 { 00:08:15.822 "name": "BaseBdev1", 00:08:15.822 "aliases": [ 00:08:15.822 "77f4db5e-42cf-11ef-96ac-773515fba644" 00:08:15.822 ], 00:08:15.822 "product_name": "Malloc disk", 00:08:15.822 "block_size": 512, 00:08:15.822 "num_blocks": 65536, 00:08:15.822 "uuid": "77f4db5e-42cf-11ef-96ac-773515fba644", 00:08:15.822 "assigned_rate_limits": { 00:08:15.822 "rw_ios_per_sec": 0, 00:08:15.822 "rw_mbytes_per_sec": 0, 00:08:15.822 "r_mbytes_per_sec": 0, 00:08:15.822 "w_mbytes_per_sec": 0 00:08:15.822 }, 00:08:15.822 
"claimed": true, 00:08:15.822 "claim_type": "exclusive_write", 00:08:15.822 "zoned": false, 00:08:15.822 "supported_io_types": { 00:08:15.822 "read": true, 00:08:15.822 "write": true, 00:08:15.822 "unmap": true, 00:08:15.822 "flush": true, 00:08:15.822 "reset": true, 00:08:15.822 "nvme_admin": false, 00:08:15.822 "nvme_io": false, 00:08:15.822 "nvme_io_md": false, 00:08:15.822 "write_zeroes": true, 00:08:15.822 "zcopy": true, 00:08:15.822 "get_zone_info": false, 00:08:15.822 "zone_management": false, 00:08:15.822 "zone_append": false, 00:08:15.822 "compare": false, 00:08:15.822 "compare_and_write": false, 00:08:15.822 "abort": true, 00:08:15.822 "seek_hole": false, 00:08:15.822 "seek_data": false, 00:08:15.822 "copy": true, 00:08:15.822 "nvme_iov_md": false 00:08:15.822 }, 00:08:15.822 "memory_domains": [ 00:08:15.823 { 00:08:15.823 "dma_device_id": "system", 00:08:15.823 "dma_device_type": 1 00:08:15.823 }, 00:08:15.823 { 00:08:15.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.823 "dma_device_type": 2 00:08:15.823 } 00:08:15.823 ], 00:08:15.823 "driver_specific": {} 00:08:15.823 } 00:08:15.823 ] 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:15.823 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.081 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:16.081 "name": "Existed_Raid", 00:08:16.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.081 "strip_size_kb": 0, 00:08:16.081 "state": "configuring", 00:08:16.081 "raid_level": "raid1", 00:08:16.081 "superblock": false, 00:08:16.081 "num_base_bdevs": 2, 00:08:16.081 "num_base_bdevs_discovered": 1, 00:08:16.081 "num_base_bdevs_operational": 2, 00:08:16.081 "base_bdevs_list": [ 00:08:16.081 { 00:08:16.081 "name": "BaseBdev1", 00:08:16.081 "uuid": "77f4db5e-42cf-11ef-96ac-773515fba644", 00:08:16.081 "is_configured": true, 00:08:16.081 "data_offset": 0, 00:08:16.081 "data_size": 65536 00:08:16.081 }, 00:08:16.081 { 00:08:16.081 "name": "BaseBdev2", 00:08:16.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.081 
"is_configured": false, 00:08:16.081 "data_offset": 0, 00:08:16.081 "data_size": 0 00:08:16.081 } 00:08:16.081 ] 00:08:16.081 }' 00:08:16.081 17:27:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:16.081 17:27:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.340 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:16.907 [2024-07-15 17:27:12.433439] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.907 [2024-07-15 17:27:12.433484] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3f6252c34500 name Existed_Raid, state configuring 00:08:16.907 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:16.907 [2024-07-15 17:27:12.725484] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.907 [2024-07-15 17:27:12.726360] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.907 [2024-07-15 17:27:12.726402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:17.165 "name": "Existed_Raid", 00:08:17.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.165 "strip_size_kb": 0, 00:08:17.165 "state": "configuring", 00:08:17.165 "raid_level": "raid1", 00:08:17.165 "superblock": false, 00:08:17.165 "num_base_bdevs": 2, 00:08:17.165 "num_base_bdevs_discovered": 1, 00:08:17.165 "num_base_bdevs_operational": 
2, 00:08:17.165 "base_bdevs_list": [ 00:08:17.165 { 00:08:17.165 "name": "BaseBdev1", 00:08:17.165 "uuid": "77f4db5e-42cf-11ef-96ac-773515fba644", 00:08:17.165 "is_configured": true, 00:08:17.165 "data_offset": 0, 00:08:17.165 "data_size": 65536 00:08:17.165 }, 00:08:17.165 { 00:08:17.165 "name": "BaseBdev2", 00:08:17.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:17.165 "is_configured": false, 00:08:17.165 "data_offset": 0, 00:08:17.165 "data_size": 0 00:08:17.165 } 00:08:17.165 ] 00:08:17.165 }' 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:17.165 17:27:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.731 17:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.731 [2024-07-15 17:27:13.549638] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.731 [2024-07-15 17:27:13.549680] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3f6252c34a00 00:08:17.731 [2024-07-15 17:27:13.549685] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:17.731 [2024-07-15 17:27:13.549708] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3f6252c97e20 00:08:17.731 [2024-07-15 17:27:13.549809] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3f6252c34a00 00:08:17.731 [2024-07-15 17:27:13.549814] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3f6252c34a00 00:08:17.731 [2024-07-15 17:27:13.549847] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.731 BaseBdev2 00:08:17.989 17:27:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:17.989 17:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:17.989 17:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:17.989 17:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:17.989 17:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:17.989 17:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:17.989 17:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:18.253 17:27:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:18.253 [ 00:08:18.253 { 00:08:18.253 "name": "BaseBdev2", 00:08:18.253 "aliases": [ 00:08:18.253 "797a4777-42cf-11ef-96ac-773515fba644" 00:08:18.253 ], 00:08:18.253 "product_name": "Malloc disk", 00:08:18.253 "block_size": 512, 00:08:18.253 "num_blocks": 65536, 00:08:18.253 "uuid": "797a4777-42cf-11ef-96ac-773515fba644", 00:08:18.253 "assigned_rate_limits": { 00:08:18.253 "rw_ios_per_sec": 0, 00:08:18.253 "rw_mbytes_per_sec": 0, 00:08:18.253 "r_mbytes_per_sec": 0, 00:08:18.253 "w_mbytes_per_sec": 0 00:08:18.253 }, 00:08:18.253 "claimed": true, 00:08:18.253 "claim_type": "exclusive_write", 00:08:18.253 "zoned": false, 00:08:18.253 
"supported_io_types": { 00:08:18.253 "read": true, 00:08:18.253 "write": true, 00:08:18.253 "unmap": true, 00:08:18.253 "flush": true, 00:08:18.253 "reset": true, 00:08:18.253 "nvme_admin": false, 00:08:18.253 "nvme_io": false, 00:08:18.253 "nvme_io_md": false, 00:08:18.253 "write_zeroes": true, 00:08:18.253 "zcopy": true, 00:08:18.253 "get_zone_info": false, 00:08:18.253 "zone_management": false, 00:08:18.253 "zone_append": false, 00:08:18.253 "compare": false, 00:08:18.253 "compare_and_write": false, 00:08:18.253 "abort": true, 00:08:18.253 "seek_hole": false, 00:08:18.253 "seek_data": false, 00:08:18.253 "copy": true, 00:08:18.253 "nvme_iov_md": false 00:08:18.253 }, 00:08:18.253 "memory_domains": [ 00:08:18.253 { 00:08:18.253 "dma_device_id": "system", 00:08:18.253 "dma_device_type": 1 00:08:18.253 }, 00:08:18.253 { 00:08:18.253 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:18.253 "dma_device_type": 2 00:08:18.253 } 00:08:18.253 ], 00:08:18.253 "driver_specific": {} 00:08:18.253 } 00:08:18.253 ] 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:18.253 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.818 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:18.818 "name": "Existed_Raid", 00:08:18.818 "uuid": "797a4ec8-42cf-11ef-96ac-773515fba644", 00:08:18.818 "strip_size_kb": 0, 00:08:18.818 "state": "online", 00:08:18.818 "raid_level": "raid1", 00:08:18.818 "superblock": false, 00:08:18.818 "num_base_bdevs": 2, 00:08:18.818 "num_base_bdevs_discovered": 2, 00:08:18.818 "num_base_bdevs_operational": 2, 00:08:18.818 "base_bdevs_list": [ 00:08:18.818 { 00:08:18.818 "name": "BaseBdev1", 00:08:18.818 "uuid": "77f4db5e-42cf-11ef-96ac-773515fba644", 00:08:18.818 "is_configured": true, 00:08:18.818 "data_offset": 0, 00:08:18.818 "data_size": 65536 00:08:18.818 }, 00:08:18.818 { 00:08:18.818 "name": 
"BaseBdev2", 00:08:18.818 "uuid": "797a4777-42cf-11ef-96ac-773515fba644", 00:08:18.818 "is_configured": true, 00:08:18.818 "data_offset": 0, 00:08:18.818 "data_size": 65536 00:08:18.818 } 00:08:18.818 ] 00:08:18.818 }' 00:08:18.818 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:18.818 17:27:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:19.078 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:19.078 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:19.078 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:19.078 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:19.078 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:19.078 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:19.078 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:19.078 17:27:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:19.337 [2024-07-15 17:27:14.989663] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:19.337 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:19.337 "name": "Existed_Raid", 00:08:19.337 "aliases": [ 00:08:19.337 "797a4ec8-42cf-11ef-96ac-773515fba644" 00:08:19.337 ], 00:08:19.337 "product_name": "Raid Volume", 00:08:19.337 "block_size": 512, 00:08:19.337 "num_blocks": 65536, 00:08:19.337 "uuid": "797a4ec8-42cf-11ef-96ac-773515fba644", 00:08:19.337 "assigned_rate_limits": { 00:08:19.337 "rw_ios_per_sec": 0, 00:08:19.337 "rw_mbytes_per_sec": 0, 00:08:19.337 "r_mbytes_per_sec": 0, 00:08:19.337 "w_mbytes_per_sec": 0 00:08:19.337 }, 00:08:19.337 "claimed": false, 00:08:19.337 "zoned": false, 00:08:19.337 "supported_io_types": { 00:08:19.337 "read": true, 00:08:19.337 "write": true, 00:08:19.337 "unmap": false, 00:08:19.337 "flush": false, 00:08:19.337 "reset": true, 00:08:19.337 "nvme_admin": false, 00:08:19.337 "nvme_io": false, 00:08:19.337 "nvme_io_md": false, 00:08:19.337 "write_zeroes": true, 00:08:19.337 "zcopy": false, 00:08:19.337 "get_zone_info": false, 00:08:19.337 "zone_management": false, 00:08:19.337 "zone_append": false, 00:08:19.337 "compare": false, 00:08:19.337 "compare_and_write": false, 00:08:19.337 "abort": false, 00:08:19.337 "seek_hole": false, 00:08:19.337 "seek_data": false, 00:08:19.337 "copy": false, 00:08:19.337 "nvme_iov_md": false 00:08:19.337 }, 00:08:19.337 "memory_domains": [ 00:08:19.337 { 00:08:19.337 "dma_device_id": "system", 00:08:19.337 "dma_device_type": 1 00:08:19.337 }, 00:08:19.337 { 00:08:19.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.337 "dma_device_type": 2 00:08:19.337 }, 00:08:19.337 { 00:08:19.337 "dma_device_id": "system", 00:08:19.337 "dma_device_type": 1 00:08:19.337 }, 00:08:19.337 { 00:08:19.337 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.337 "dma_device_type": 2 00:08:19.337 } 00:08:19.337 ], 00:08:19.337 "driver_specific": { 00:08:19.337 "raid": { 00:08:19.337 "uuid": "797a4ec8-42cf-11ef-96ac-773515fba644", 00:08:19.337 "strip_size_kb": 0, 00:08:19.337 "state": "online", 00:08:19.337 
"raid_level": "raid1", 00:08:19.337 "superblock": false, 00:08:19.337 "num_base_bdevs": 2, 00:08:19.337 "num_base_bdevs_discovered": 2, 00:08:19.337 "num_base_bdevs_operational": 2, 00:08:19.337 "base_bdevs_list": [ 00:08:19.337 { 00:08:19.337 "name": "BaseBdev1", 00:08:19.337 "uuid": "77f4db5e-42cf-11ef-96ac-773515fba644", 00:08:19.337 "is_configured": true, 00:08:19.337 "data_offset": 0, 00:08:19.337 "data_size": 65536 00:08:19.337 }, 00:08:19.337 { 00:08:19.337 "name": "BaseBdev2", 00:08:19.337 "uuid": "797a4777-42cf-11ef-96ac-773515fba644", 00:08:19.337 "is_configured": true, 00:08:19.337 "data_offset": 0, 00:08:19.337 "data_size": 65536 00:08:19.337 } 00:08:19.337 ] 00:08:19.337 } 00:08:19.337 } 00:08:19.337 }' 00:08:19.337 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:19.337 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:19.337 BaseBdev2' 00:08:19.337 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:19.337 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:19.337 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:19.595 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:19.595 "name": "BaseBdev1", 00:08:19.595 "aliases": [ 00:08:19.595 "77f4db5e-42cf-11ef-96ac-773515fba644" 00:08:19.595 ], 00:08:19.595 "product_name": "Malloc disk", 00:08:19.595 "block_size": 512, 00:08:19.595 "num_blocks": 65536, 00:08:19.595 "uuid": "77f4db5e-42cf-11ef-96ac-773515fba644", 00:08:19.595 "assigned_rate_limits": { 00:08:19.595 "rw_ios_per_sec": 0, 00:08:19.595 "rw_mbytes_per_sec": 0, 00:08:19.595 "r_mbytes_per_sec": 0, 00:08:19.595 "w_mbytes_per_sec": 0 00:08:19.595 }, 00:08:19.595 "claimed": true, 00:08:19.595 "claim_type": "exclusive_write", 00:08:19.595 "zoned": false, 00:08:19.595 "supported_io_types": { 00:08:19.595 "read": true, 00:08:19.595 "write": true, 00:08:19.595 "unmap": true, 00:08:19.595 "flush": true, 00:08:19.595 "reset": true, 00:08:19.595 "nvme_admin": false, 00:08:19.595 "nvme_io": false, 00:08:19.595 "nvme_io_md": false, 00:08:19.595 "write_zeroes": true, 00:08:19.595 "zcopy": true, 00:08:19.595 "get_zone_info": false, 00:08:19.595 "zone_management": false, 00:08:19.595 "zone_append": false, 00:08:19.595 "compare": false, 00:08:19.595 "compare_and_write": false, 00:08:19.595 "abort": true, 00:08:19.595 "seek_hole": false, 00:08:19.595 "seek_data": false, 00:08:19.595 "copy": true, 00:08:19.595 "nvme_iov_md": false 00:08:19.595 }, 00:08:19.595 "memory_domains": [ 00:08:19.595 { 00:08:19.595 "dma_device_id": "system", 00:08:19.595 "dma_device_type": 1 00:08:19.595 }, 00:08:19.595 { 00:08:19.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.595 "dma_device_type": 2 00:08:19.595 } 00:08:19.595 ], 00:08:19.596 "driver_specific": {} 00:08:19.596 }' 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:19.596 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:19.854 "name": "BaseBdev2", 00:08:19.854 "aliases": [ 00:08:19.854 "797a4777-42cf-11ef-96ac-773515fba644" 00:08:19.854 ], 00:08:19.854 "product_name": "Malloc disk", 00:08:19.854 "block_size": 512, 00:08:19.854 "num_blocks": 65536, 00:08:19.854 "uuid": "797a4777-42cf-11ef-96ac-773515fba644", 00:08:19.854 "assigned_rate_limits": { 00:08:19.854 "rw_ios_per_sec": 0, 00:08:19.854 "rw_mbytes_per_sec": 0, 00:08:19.854 "r_mbytes_per_sec": 0, 00:08:19.854 "w_mbytes_per_sec": 0 00:08:19.854 }, 00:08:19.854 "claimed": true, 00:08:19.854 "claim_type": "exclusive_write", 00:08:19.854 "zoned": false, 00:08:19.854 "supported_io_types": { 00:08:19.854 "read": true, 00:08:19.854 "write": true, 00:08:19.854 "unmap": true, 00:08:19.854 "flush": true, 00:08:19.854 "reset": true, 00:08:19.854 "nvme_admin": false, 00:08:19.854 "nvme_io": false, 00:08:19.854 "nvme_io_md": false, 00:08:19.854 "write_zeroes": true, 00:08:19.854 "zcopy": true, 00:08:19.854 "get_zone_info": false, 00:08:19.854 "zone_management": false, 00:08:19.854 "zone_append": false, 00:08:19.854 "compare": false, 00:08:19.854 "compare_and_write": false, 00:08:19.854 "abort": true, 00:08:19.854 "seek_hole": false, 00:08:19.854 "seek_data": false, 00:08:19.854 "copy": true, 00:08:19.854 "nvme_iov_md": false 00:08:19.854 }, 00:08:19.854 "memory_domains": [ 00:08:19.854 { 00:08:19.854 "dma_device_id": "system", 00:08:19.854 "dma_device_type": 1 00:08:19.854 }, 00:08:19.854 { 00:08:19.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:19.854 "dma_device_type": 2 00:08:19.854 } 00:08:19.854 ], 00:08:19.854 "driver_specific": {} 00:08:19.854 }' 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:19.854 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:20.113 [2024-07-15 17:27:15.889735] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:20.113 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:20.113 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:20.113 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:20.113 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:20.113 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:20.113 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.114 17:27:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:20.372 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:20.372 "name": "Existed_Raid", 00:08:20.372 "uuid": "797a4ec8-42cf-11ef-96ac-773515fba644", 00:08:20.372 "strip_size_kb": 0, 00:08:20.372 "state": "online", 00:08:20.372 "raid_level": "raid1", 00:08:20.372 "superblock": false, 00:08:20.372 "num_base_bdevs": 2, 00:08:20.372 "num_base_bdevs_discovered": 1, 00:08:20.372 "num_base_bdevs_operational": 1, 00:08:20.372 "base_bdevs_list": [ 00:08:20.372 { 00:08:20.372 "name": null, 00:08:20.372 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:20.372 "is_configured": false, 
00:08:20.372 "data_offset": 0, 00:08:20.372 "data_size": 65536 00:08:20.372 }, 00:08:20.372 { 00:08:20.372 "name": "BaseBdev2", 00:08:20.372 "uuid": "797a4777-42cf-11ef-96ac-773515fba644", 00:08:20.372 "is_configured": true, 00:08:20.372 "data_offset": 0, 00:08:20.372 "data_size": 65536 00:08:20.372 } 00:08:20.372 ] 00:08:20.372 }' 00:08:20.372 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:20.372 17:27:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.937 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:20.937 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:20.937 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:20.937 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:21.204 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:21.204 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:21.204 17:27:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:21.204 [2024-07-15 17:27:17.008226] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:21.204 [2024-07-15 17:27:17.008276] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:21.204 [2024-07-15 17:27:17.014584] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:21.204 [2024-07-15 17:27:17.014601] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:21.204 [2024-07-15 17:27:17.014605] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3f6252c34a00 name Existed_Raid, state offline 00:08:21.204 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:21.204 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:21.204 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.204 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50764 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 50764 ']' 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 50764 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:08:21.488 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 50764 00:08:21.489 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:21.489 killing process with pid 50764 00:08:21.489 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:21.489 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50764' 00:08:21.489 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 50764 00:08:21.489 [2024-07-15 17:27:17.265260] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:21.489 [2024-07-15 17:27:17.265294] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.489 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 50764 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:21.746 ************************************ 00:08:21.746 END TEST raid_state_function_test 00:08:21.746 ************************************ 00:08:21.746 00:08:21.746 real 0m9.054s 00:08:21.746 user 0m15.705s 00:08:21.746 sys 0m1.666s 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.746 17:27:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:21.746 17:27:17 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:21.746 17:27:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:21.746 17:27:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:21.746 17:27:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.746 ************************************ 00:08:21.746 START TEST raid_state_function_test_sb 00:08:21.746 ************************************ 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=51035 00:08:21.746 Process raid pid: 51035 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51035' 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 51035 /var/tmp/spdk-raid.sock 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 51035 ']' 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:21.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:21.746 17:27:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.746 [2024-07-15 17:27:17.507455] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:08:21.746 [2024-07-15 17:27:17.507698] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:22.312 EAL: TSC is not safe to use in SMP mode 00:08:22.312 EAL: TSC is not invariant 00:08:22.312 [2024-07-15 17:27:18.037736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.312 [2024-07-15 17:27:18.121484] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
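The raid_state_function_test_sb run below drives a standalone bdev_svc target purely over its RPC Unix socket: the raid1 volume is declared first and sits in "configuring" while its base bdevs are missing, the malloc base bdevs are then added one at a time, and the raid state is re-checked after each step. A minimal sketch of that flow, assuming the same socket path, bdev names, and 32 MiB / 512-byte-block sizes that this trace uses:

  # start the target and point all RPCs at its socket (the test waits for the socket to appear)
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # declare the raid1 volume with a superblock (-s); it stays "configuring" while base bdevs are missing
  $rpc_py bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

  # create the two malloc base bdevs; once both exist the raid transitions to "online"
  $rpc_py bdev_malloc_create 32 512 -b BaseBdev1
  $rpc_py bdev_malloc_create 32 512 -b BaseBdev2

  # verify the assembled state the same way the script does
  $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'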
00:08:22.312 [2024-07-15 17:27:18.123698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.312 [2024-07-15 17:27:18.124544] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.312 [2024-07-15 17:27:18.124559] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.877 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:22.877 17:27:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:08:22.877 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:23.135 [2024-07-15 17:27:18.756350] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:23.135 [2024-07-15 17:27:18.756429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:23.135 [2024-07-15 17:27:18.756459] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:23.135 [2024-07-15 17:27:18.756484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:23.135 17:27:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.393 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:23.393 "name": "Existed_Raid", 00:08:23.393 "uuid": "7c94c7b6-42cf-11ef-96ac-773515fba644", 00:08:23.393 "strip_size_kb": 0, 00:08:23.393 "state": "configuring", 00:08:23.393 "raid_level": "raid1", 00:08:23.393 "superblock": true, 00:08:23.393 "num_base_bdevs": 2, 00:08:23.393 "num_base_bdevs_discovered": 0, 00:08:23.393 "num_base_bdevs_operational": 2, 00:08:23.393 "base_bdevs_list": [ 00:08:23.393 { 00:08:23.393 "name": "BaseBdev1", 00:08:23.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.393 "is_configured": false, 00:08:23.393 "data_offset": 0, 00:08:23.393 "data_size": 0 00:08:23.393 }, 00:08:23.393 
{ 00:08:23.393 "name": "BaseBdev2", 00:08:23.393 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.393 "is_configured": false, 00:08:23.393 "data_offset": 0, 00:08:23.393 "data_size": 0 00:08:23.393 } 00:08:23.393 ] 00:08:23.393 }' 00:08:23.393 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:23.393 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.651 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:23.910 [2024-07-15 17:27:19.524453] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:23.910 [2024-07-15 17:27:19.524500] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x303c3b634500 name Existed_Raid, state configuring 00:08:23.910 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:24.168 [2024-07-15 17:27:19.752511] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:24.168 [2024-07-15 17:27:19.752573] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:24.168 [2024-07-15 17:27:19.752579] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:24.169 [2024-07-15 17:27:19.752587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:24.169 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:24.169 [2024-07-15 17:27:19.981575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:24.169 BaseBdev1 00:08:24.169 17:27:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:24.169 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:24.169 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:24.169 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:24.169 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:24.169 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:24.169 17:27:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:24.426 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:24.685 [ 00:08:24.685 { 00:08:24.685 "name": "BaseBdev1", 00:08:24.685 "aliases": [ 00:08:24.685 "7d4f9488-42cf-11ef-96ac-773515fba644" 00:08:24.685 ], 00:08:24.685 "product_name": "Malloc disk", 00:08:24.685 "block_size": 512, 00:08:24.685 "num_blocks": 65536, 00:08:24.685 "uuid": "7d4f9488-42cf-11ef-96ac-773515fba644", 00:08:24.685 "assigned_rate_limits": { 00:08:24.685 "rw_ios_per_sec": 0, 00:08:24.685 "rw_mbytes_per_sec": 0, 00:08:24.685 
"r_mbytes_per_sec": 0, 00:08:24.685 "w_mbytes_per_sec": 0 00:08:24.685 }, 00:08:24.685 "claimed": true, 00:08:24.685 "claim_type": "exclusive_write", 00:08:24.685 "zoned": false, 00:08:24.685 "supported_io_types": { 00:08:24.685 "read": true, 00:08:24.685 "write": true, 00:08:24.685 "unmap": true, 00:08:24.685 "flush": true, 00:08:24.685 "reset": true, 00:08:24.685 "nvme_admin": false, 00:08:24.685 "nvme_io": false, 00:08:24.685 "nvme_io_md": false, 00:08:24.685 "write_zeroes": true, 00:08:24.685 "zcopy": true, 00:08:24.685 "get_zone_info": false, 00:08:24.685 "zone_management": false, 00:08:24.685 "zone_append": false, 00:08:24.685 "compare": false, 00:08:24.685 "compare_and_write": false, 00:08:24.685 "abort": true, 00:08:24.685 "seek_hole": false, 00:08:24.685 "seek_data": false, 00:08:24.685 "copy": true, 00:08:24.685 "nvme_iov_md": false 00:08:24.685 }, 00:08:24.685 "memory_domains": [ 00:08:24.685 { 00:08:24.685 "dma_device_id": "system", 00:08:24.685 "dma_device_type": 1 00:08:24.685 }, 00:08:24.685 { 00:08:24.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.685 "dma_device_type": 2 00:08:24.685 } 00:08:24.685 ], 00:08:24.685 "driver_specific": {} 00:08:24.685 } 00:08:24.685 ] 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:24.685 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:25.271 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:25.271 "name": "Existed_Raid", 00:08:25.271 "uuid": "7d2cc836-42cf-11ef-96ac-773515fba644", 00:08:25.271 "strip_size_kb": 0, 00:08:25.271 "state": "configuring", 00:08:25.271 "raid_level": "raid1", 00:08:25.271 "superblock": true, 00:08:25.271 "num_base_bdevs": 2, 00:08:25.271 "num_base_bdevs_discovered": 1, 00:08:25.271 "num_base_bdevs_operational": 2, 00:08:25.271 "base_bdevs_list": [ 00:08:25.271 { 00:08:25.271 "name": "BaseBdev1", 00:08:25.271 "uuid": "7d4f9488-42cf-11ef-96ac-773515fba644", 00:08:25.271 "is_configured": true, 00:08:25.271 "data_offset": 2048, 00:08:25.271 "data_size": 63488 00:08:25.271 }, 
00:08:25.271 { 00:08:25.271 "name": "BaseBdev2", 00:08:25.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:25.271 "is_configured": false, 00:08:25.271 "data_offset": 0, 00:08:25.271 "data_size": 0 00:08:25.271 } 00:08:25.271 ] 00:08:25.271 }' 00:08:25.271 17:27:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:25.271 17:27:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.271 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:25.530 [2024-07-15 17:27:21.348608] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:25.530 [2024-07-15 17:27:21.348642] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x303c3b634500 name Existed_Raid, state configuring 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:25.789 [2024-07-15 17:27:21.572626] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.789 [2024-07-15 17:27:21.573578] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:25.789 [2024-07-15 17:27:21.573631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:25.789 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:26.047 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:26.047 "name": "Existed_Raid", 00:08:26.047 "uuid": "7e42825e-42cf-11ef-96ac-773515fba644", 00:08:26.047 "strip_size_kb": 0, 00:08:26.047 "state": "configuring", 
00:08:26.047 "raid_level": "raid1", 00:08:26.047 "superblock": true, 00:08:26.047 "num_base_bdevs": 2, 00:08:26.047 "num_base_bdevs_discovered": 1, 00:08:26.047 "num_base_bdevs_operational": 2, 00:08:26.047 "base_bdevs_list": [ 00:08:26.047 { 00:08:26.047 "name": "BaseBdev1", 00:08:26.047 "uuid": "7d4f9488-42cf-11ef-96ac-773515fba644", 00:08:26.047 "is_configured": true, 00:08:26.047 "data_offset": 2048, 00:08:26.047 "data_size": 63488 00:08:26.047 }, 00:08:26.047 { 00:08:26.047 "name": "BaseBdev2", 00:08:26.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:26.047 "is_configured": false, 00:08:26.047 "data_offset": 0, 00:08:26.047 "data_size": 0 00:08:26.047 } 00:08:26.047 ] 00:08:26.047 }' 00:08:26.047 17:27:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:26.047 17:27:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:26.611 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:26.611 [2024-07-15 17:27:22.424767] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:26.611 [2024-07-15 17:27:22.424859] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x303c3b634a00 00:08:26.611 [2024-07-15 17:27:22.424865] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:26.611 [2024-07-15 17:27:22.424900] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x303c3b697e20 00:08:26.611 [2024-07-15 17:27:22.424960] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x303c3b634a00 00:08:26.611 [2024-07-15 17:27:22.424965] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x303c3b634a00 00:08:26.611 [2024-07-15 17:27:22.424984] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.611 BaseBdev2 00:08:26.611 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:26.611 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:26.611 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:26.611 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:26.611 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:26.611 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:26.611 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:26.868 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:27.124 [ 00:08:27.124 { 00:08:27.124 "name": "BaseBdev2", 00:08:27.124 "aliases": [ 00:08:27.124 "7ec48471-42cf-11ef-96ac-773515fba644" 00:08:27.124 ], 00:08:27.124 "product_name": "Malloc disk", 00:08:27.124 "block_size": 512, 00:08:27.124 "num_blocks": 65536, 00:08:27.124 "uuid": "7ec48471-42cf-11ef-96ac-773515fba644", 00:08:27.124 "assigned_rate_limits": { 00:08:27.124 "rw_ios_per_sec": 0, 00:08:27.124 
"rw_mbytes_per_sec": 0, 00:08:27.124 "r_mbytes_per_sec": 0, 00:08:27.124 "w_mbytes_per_sec": 0 00:08:27.124 }, 00:08:27.124 "claimed": true, 00:08:27.124 "claim_type": "exclusive_write", 00:08:27.124 "zoned": false, 00:08:27.124 "supported_io_types": { 00:08:27.124 "read": true, 00:08:27.124 "write": true, 00:08:27.124 "unmap": true, 00:08:27.124 "flush": true, 00:08:27.124 "reset": true, 00:08:27.124 "nvme_admin": false, 00:08:27.124 "nvme_io": false, 00:08:27.124 "nvme_io_md": false, 00:08:27.124 "write_zeroes": true, 00:08:27.124 "zcopy": true, 00:08:27.124 "get_zone_info": false, 00:08:27.124 "zone_management": false, 00:08:27.124 "zone_append": false, 00:08:27.124 "compare": false, 00:08:27.124 "compare_and_write": false, 00:08:27.124 "abort": true, 00:08:27.124 "seek_hole": false, 00:08:27.124 "seek_data": false, 00:08:27.124 "copy": true, 00:08:27.124 "nvme_iov_md": false 00:08:27.124 }, 00:08:27.124 "memory_domains": [ 00:08:27.124 { 00:08:27.124 "dma_device_id": "system", 00:08:27.124 "dma_device_type": 1 00:08:27.124 }, 00:08:27.124 { 00:08:27.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.124 "dma_device_type": 2 00:08:27.124 } 00:08:27.124 ], 00:08:27.124 "driver_specific": {} 00:08:27.124 } 00:08:27.124 ] 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.124 17:27:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:27.381 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:27.381 "name": "Existed_Raid", 00:08:27.381 "uuid": "7e42825e-42cf-11ef-96ac-773515fba644", 00:08:27.381 "strip_size_kb": 0, 00:08:27.381 "state": "online", 00:08:27.381 "raid_level": "raid1", 00:08:27.381 "superblock": true, 00:08:27.381 "num_base_bdevs": 2, 00:08:27.381 "num_base_bdevs_discovered": 2, 00:08:27.381 "num_base_bdevs_operational": 2, 00:08:27.381 
"base_bdevs_list": [ 00:08:27.381 { 00:08:27.381 "name": "BaseBdev1", 00:08:27.381 "uuid": "7d4f9488-42cf-11ef-96ac-773515fba644", 00:08:27.381 "is_configured": true, 00:08:27.381 "data_offset": 2048, 00:08:27.381 "data_size": 63488 00:08:27.381 }, 00:08:27.381 { 00:08:27.381 "name": "BaseBdev2", 00:08:27.381 "uuid": "7ec48471-42cf-11ef-96ac-773515fba644", 00:08:27.381 "is_configured": true, 00:08:27.381 "data_offset": 2048, 00:08:27.381 "data_size": 63488 00:08:27.381 } 00:08:27.381 ] 00:08:27.381 }' 00:08:27.382 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:27.382 17:27:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:27.945 [2024-07-15 17:27:23.728706] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.945 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:27.945 "name": "Existed_Raid", 00:08:27.945 "aliases": [ 00:08:27.945 "7e42825e-42cf-11ef-96ac-773515fba644" 00:08:27.945 ], 00:08:27.945 "product_name": "Raid Volume", 00:08:27.945 "block_size": 512, 00:08:27.945 "num_blocks": 63488, 00:08:27.945 "uuid": "7e42825e-42cf-11ef-96ac-773515fba644", 00:08:27.945 "assigned_rate_limits": { 00:08:27.945 "rw_ios_per_sec": 0, 00:08:27.945 "rw_mbytes_per_sec": 0, 00:08:27.945 "r_mbytes_per_sec": 0, 00:08:27.945 "w_mbytes_per_sec": 0 00:08:27.945 }, 00:08:27.945 "claimed": false, 00:08:27.945 "zoned": false, 00:08:27.945 "supported_io_types": { 00:08:27.945 "read": true, 00:08:27.945 "write": true, 00:08:27.945 "unmap": false, 00:08:27.945 "flush": false, 00:08:27.945 "reset": true, 00:08:27.945 "nvme_admin": false, 00:08:27.945 "nvme_io": false, 00:08:27.945 "nvme_io_md": false, 00:08:27.945 "write_zeroes": true, 00:08:27.945 "zcopy": false, 00:08:27.945 "get_zone_info": false, 00:08:27.945 "zone_management": false, 00:08:27.945 "zone_append": false, 00:08:27.945 "compare": false, 00:08:27.945 "compare_and_write": false, 00:08:27.945 "abort": false, 00:08:27.945 "seek_hole": false, 00:08:27.945 "seek_data": false, 00:08:27.945 "copy": false, 00:08:27.945 "nvme_iov_md": false 00:08:27.945 }, 00:08:27.945 "memory_domains": [ 00:08:27.945 { 00:08:27.945 "dma_device_id": "system", 00:08:27.945 "dma_device_type": 1 00:08:27.945 }, 00:08:27.945 { 00:08:27.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.945 "dma_device_type": 2 00:08:27.945 }, 00:08:27.945 { 00:08:27.945 "dma_device_id": "system", 00:08:27.945 "dma_device_type": 1 00:08:27.945 }, 
00:08:27.945 { 00:08:27.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.945 "dma_device_type": 2 00:08:27.945 } 00:08:27.945 ], 00:08:27.945 "driver_specific": { 00:08:27.945 "raid": { 00:08:27.945 "uuid": "7e42825e-42cf-11ef-96ac-773515fba644", 00:08:27.945 "strip_size_kb": 0, 00:08:27.945 "state": "online", 00:08:27.945 "raid_level": "raid1", 00:08:27.945 "superblock": true, 00:08:27.945 "num_base_bdevs": 2, 00:08:27.945 "num_base_bdevs_discovered": 2, 00:08:27.945 "num_base_bdevs_operational": 2, 00:08:27.945 "base_bdevs_list": [ 00:08:27.945 { 00:08:27.945 "name": "BaseBdev1", 00:08:27.945 "uuid": "7d4f9488-42cf-11ef-96ac-773515fba644", 00:08:27.945 "is_configured": true, 00:08:27.945 "data_offset": 2048, 00:08:27.945 "data_size": 63488 00:08:27.945 }, 00:08:27.945 { 00:08:27.945 "name": "BaseBdev2", 00:08:27.945 "uuid": "7ec48471-42cf-11ef-96ac-773515fba644", 00:08:27.945 "is_configured": true, 00:08:27.945 "data_offset": 2048, 00:08:27.945 "data_size": 63488 00:08:27.945 } 00:08:27.945 ] 00:08:27.945 } 00:08:27.945 } 00:08:27.945 }' 00:08:27.946 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.946 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:27.946 BaseBdev2' 00:08:27.946 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:27.946 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:27.946 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:28.203 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:28.203 "name": "BaseBdev1", 00:08:28.203 "aliases": [ 00:08:28.203 "7d4f9488-42cf-11ef-96ac-773515fba644" 00:08:28.203 ], 00:08:28.203 "product_name": "Malloc disk", 00:08:28.203 "block_size": 512, 00:08:28.203 "num_blocks": 65536, 00:08:28.203 "uuid": "7d4f9488-42cf-11ef-96ac-773515fba644", 00:08:28.203 "assigned_rate_limits": { 00:08:28.203 "rw_ios_per_sec": 0, 00:08:28.203 "rw_mbytes_per_sec": 0, 00:08:28.203 "r_mbytes_per_sec": 0, 00:08:28.203 "w_mbytes_per_sec": 0 00:08:28.203 }, 00:08:28.203 "claimed": true, 00:08:28.203 "claim_type": "exclusive_write", 00:08:28.203 "zoned": false, 00:08:28.203 "supported_io_types": { 00:08:28.203 "read": true, 00:08:28.203 "write": true, 00:08:28.203 "unmap": true, 00:08:28.203 "flush": true, 00:08:28.203 "reset": true, 00:08:28.203 "nvme_admin": false, 00:08:28.203 "nvme_io": false, 00:08:28.203 "nvme_io_md": false, 00:08:28.203 "write_zeroes": true, 00:08:28.203 "zcopy": true, 00:08:28.203 "get_zone_info": false, 00:08:28.203 "zone_management": false, 00:08:28.203 "zone_append": false, 00:08:28.203 "compare": false, 00:08:28.203 "compare_and_write": false, 00:08:28.203 "abort": true, 00:08:28.203 "seek_hole": false, 00:08:28.203 "seek_data": false, 00:08:28.203 "copy": true, 00:08:28.203 "nvme_iov_md": false 00:08:28.203 }, 00:08:28.203 "memory_domains": [ 00:08:28.203 { 00:08:28.203 "dma_device_id": "system", 00:08:28.203 "dma_device_type": 1 00:08:28.203 }, 00:08:28.203 { 00:08:28.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.203 "dma_device_type": 2 00:08:28.203 } 00:08:28.203 ], 00:08:28.203 "driver_specific": {} 00:08:28.203 }' 00:08:28.203 17:27:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:28.203 17:27:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:28.203 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:28.203 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:28.203 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:28.203 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:28.203 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:28.203 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:28.460 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:28.460 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:28.460 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:28.460 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:28.460 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:28.460 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:28.460 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:28.750 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:28.750 "name": "BaseBdev2", 00:08:28.750 "aliases": [ 00:08:28.750 "7ec48471-42cf-11ef-96ac-773515fba644" 00:08:28.750 ], 00:08:28.750 "product_name": "Malloc disk", 00:08:28.750 "block_size": 512, 00:08:28.750 "num_blocks": 65536, 00:08:28.750 "uuid": "7ec48471-42cf-11ef-96ac-773515fba644", 00:08:28.750 "assigned_rate_limits": { 00:08:28.750 "rw_ios_per_sec": 0, 00:08:28.750 "rw_mbytes_per_sec": 0, 00:08:28.750 "r_mbytes_per_sec": 0, 00:08:28.750 "w_mbytes_per_sec": 0 00:08:28.750 }, 00:08:28.750 "claimed": true, 00:08:28.750 "claim_type": "exclusive_write", 00:08:28.750 "zoned": false, 00:08:28.750 "supported_io_types": { 00:08:28.750 "read": true, 00:08:28.750 "write": true, 00:08:28.750 "unmap": true, 00:08:28.750 "flush": true, 00:08:28.750 "reset": true, 00:08:28.750 "nvme_admin": false, 00:08:28.750 "nvme_io": false, 00:08:28.750 "nvme_io_md": false, 00:08:28.750 "write_zeroes": true, 00:08:28.750 "zcopy": true, 00:08:28.750 "get_zone_info": false, 00:08:28.750 "zone_management": false, 00:08:28.750 "zone_append": false, 00:08:28.750 "compare": false, 00:08:28.750 "compare_and_write": false, 00:08:28.750 "abort": true, 00:08:28.750 "seek_hole": false, 00:08:28.750 "seek_data": false, 00:08:28.750 "copy": true, 00:08:28.750 "nvme_iov_md": false 00:08:28.750 }, 00:08:28.750 "memory_domains": [ 00:08:28.750 { 00:08:28.750 "dma_device_id": "system", 00:08:28.750 "dma_device_type": 1 00:08:28.750 }, 00:08:28.750 { 00:08:28.751 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.751 "dma_device_type": 2 00:08:28.751 } 00:08:28.751 ], 00:08:28.751 "driver_specific": {} 00:08:28.751 }' 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:28.751 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:29.008 [2024-07-15 17:27:24.672693] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.008 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.266 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:29.266 
"name": "Existed_Raid", 00:08:29.266 "uuid": "7e42825e-42cf-11ef-96ac-773515fba644", 00:08:29.266 "strip_size_kb": 0, 00:08:29.266 "state": "online", 00:08:29.266 "raid_level": "raid1", 00:08:29.266 "superblock": true, 00:08:29.266 "num_base_bdevs": 2, 00:08:29.266 "num_base_bdevs_discovered": 1, 00:08:29.266 "num_base_bdevs_operational": 1, 00:08:29.266 "base_bdevs_list": [ 00:08:29.266 { 00:08:29.266 "name": null, 00:08:29.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.266 "is_configured": false, 00:08:29.266 "data_offset": 2048, 00:08:29.266 "data_size": 63488 00:08:29.266 }, 00:08:29.266 { 00:08:29.266 "name": "BaseBdev2", 00:08:29.266 "uuid": "7ec48471-42cf-11ef-96ac-773515fba644", 00:08:29.266 "is_configured": true, 00:08:29.266 "data_offset": 2048, 00:08:29.266 "data_size": 63488 00:08:29.266 } 00:08:29.266 ] 00:08:29.266 }' 00:08:29.266 17:27:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:29.266 17:27:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:29.522 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:29.522 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:29.522 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.522 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:30.089 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:30.089 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:30.089 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:30.089 [2024-07-15 17:27:25.854753] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:30.089 [2024-07-15 17:27:25.854798] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:30.089 [2024-07-15 17:27:25.861260] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:30.089 [2024-07-15 17:27:25.861275] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:30.089 [2024-07-15 17:27:25.861294] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x303c3b634a00 name Existed_Raid, state offline 00:08:30.089 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:30.089 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:30.089 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:30.089 17:27:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:30.348 17:27:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 51035 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 51035 ']' 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 51035 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 51035 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:30.348 killing process with pid 51035 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51035' 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 51035 00:08:30.348 [2024-07-15 17:27:26.121274] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:30.348 [2024-07-15 17:27:26.121308] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.348 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 51035 00:08:30.606 17:27:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:30.606 ************************************ 00:08:30.607 END TEST raid_state_function_test_sb 00:08:30.607 00:08:30.607 real 0m8.810s 00:08:30.607 user 0m15.254s 00:08:30.607 sys 0m1.619s 00:08:30.607 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:30.607 17:27:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:30.607 ************************************ 00:08:30.607 17:27:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:30.607 17:27:26 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:30.607 17:27:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:30.607 17:27:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.607 17:27:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.607 ************************************ 00:08:30.607 START TEST raid_superblock_test 00:08:30.607 ************************************ 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:30.607 17:27:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51309 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51309 /var/tmp/spdk-raid.sock 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 51309 ']' 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.607 17:27:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.607 [2024-07-15 17:27:26.364739] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:08:30.607 [2024-07-15 17:27:26.365042] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:31.173 EAL: TSC is not safe to use in SMP mode 00:08:31.173 EAL: TSC is not invariant 00:08:31.173 [2024-07-15 17:27:26.930882] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.431 [2024-07-15 17:27:27.025018] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
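raid_superblock_test assembles its raid1 volume out of passthru bdevs layered on malloc bdevs, each passthru carrying a fixed UUID (presumably so the array can later be re-assembled from its on-disk superblock). A minimal sketch of the setup sequence, assuming the same names and UUIDs that appear in the trace that follows:

  rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # two 32 MiB malloc backing devices with 512-byte blocks
  $rpc_py bdev_malloc_create 32 512 -b malloc1
  $rpc_py bdev_malloc_create 32 512 -b malloc2

  # wrap each in a passthru bdev with a deterministic UUID
  $rpc_py bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc_py bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002

  # build the raid1 volume on the passthru devices, with a superblock (-s)
  $rpc_py bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s

  # confirm the volume is online
  $rpc_py bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'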
00:08:31.431 [2024-07-15 17:27:27.027300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.431 [2024-07-15 17:27:27.028097] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.431 [2024-07-15 17:27:27.028111] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:31.689 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:31.946 malloc1 00:08:31.946 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:32.206 [2024-07-15 17:27:27.933323] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:32.206 [2024-07-15 17:27:27.933437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.206 [2024-07-15 17:27:27.933450] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f90ba34780 00:08:32.206 [2024-07-15 17:27:27.933458] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.206 [2024-07-15 17:27:27.934385] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.206 [2024-07-15 17:27:27.934413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:32.206 pt1 00:08:32.206 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:32.206 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:32.206 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:32.206 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:32.206 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:32.206 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:32.206 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:32.206 17:27:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:32.206 17:27:27 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:32.468 malloc2 00:08:32.468 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:32.728 [2024-07-15 17:27:28.461331] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:32.728 [2024-07-15 17:27:28.461410] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:32.728 [2024-07-15 17:27:28.461423] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f90ba34c80 00:08:32.728 [2024-07-15 17:27:28.461430] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:32.728 [2024-07-15 17:27:28.462182] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:32.728 [2024-07-15 17:27:28.462214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:32.728 pt2 00:08:32.728 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:32.728 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:32.728 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:08:32.986 [2024-07-15 17:27:28.697346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:32.986 [2024-07-15 17:27:28.697971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:32.986 [2024-07-15 17:27:28.698047] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18f90ba34f00 00:08:32.986 [2024-07-15 17:27:28.698053] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:32.986 [2024-07-15 17:27:28.698107] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f90ba97e20 00:08:32.986 [2024-07-15 17:27:28.698181] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18f90ba34f00 00:08:32.986 [2024-07-15 17:27:28.698185] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18f90ba34f00 00:08:32.986 [2024-07-15 17:27:28.698213] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.986 17:27:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.244 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:33.244 "name": "raid_bdev1", 00:08:33.244 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:33.244 "strip_size_kb": 0, 00:08:33.244 "state": "online", 00:08:33.244 "raid_level": "raid1", 00:08:33.244 "superblock": true, 00:08:33.244 "num_base_bdevs": 2, 00:08:33.244 "num_base_bdevs_discovered": 2, 00:08:33.244 "num_base_bdevs_operational": 2, 00:08:33.244 "base_bdevs_list": [ 00:08:33.244 { 00:08:33.244 "name": "pt1", 00:08:33.244 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:33.244 "is_configured": true, 00:08:33.244 "data_offset": 2048, 00:08:33.244 "data_size": 63488 00:08:33.244 }, 00:08:33.244 { 00:08:33.244 "name": "pt2", 00:08:33.244 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:33.244 "is_configured": true, 00:08:33.244 "data_offset": 2048, 00:08:33.244 "data_size": 63488 00:08:33.244 } 00:08:33.244 ] 00:08:33.244 }' 00:08:33.244 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:33.244 17:27:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.815 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:33.815 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:33.815 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:33.815 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:33.815 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:33.815 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:33.815 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:33.815 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:34.075 [2024-07-15 17:27:29.657445] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.075 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:34.075 "name": "raid_bdev1", 00:08:34.075 "aliases": [ 00:08:34.075 "8281a7bf-42cf-11ef-96ac-773515fba644" 00:08:34.075 ], 00:08:34.075 "product_name": "Raid Volume", 00:08:34.075 "block_size": 512, 00:08:34.075 "num_blocks": 63488, 00:08:34.075 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:34.075 "assigned_rate_limits": { 00:08:34.075 "rw_ios_per_sec": 0, 00:08:34.075 "rw_mbytes_per_sec": 0, 00:08:34.075 "r_mbytes_per_sec": 0, 00:08:34.075 "w_mbytes_per_sec": 0 00:08:34.075 }, 00:08:34.075 "claimed": false, 00:08:34.075 "zoned": false, 00:08:34.075 "supported_io_types": { 00:08:34.075 "read": true, 00:08:34.075 "write": true, 00:08:34.075 "unmap": false, 00:08:34.075 "flush": false, 00:08:34.075 "reset": true, 00:08:34.075 "nvme_admin": false, 00:08:34.075 "nvme_io": 
false, 00:08:34.075 "nvme_io_md": false, 00:08:34.075 "write_zeroes": true, 00:08:34.075 "zcopy": false, 00:08:34.075 "get_zone_info": false, 00:08:34.075 "zone_management": false, 00:08:34.075 "zone_append": false, 00:08:34.075 "compare": false, 00:08:34.075 "compare_and_write": false, 00:08:34.075 "abort": false, 00:08:34.075 "seek_hole": false, 00:08:34.075 "seek_data": false, 00:08:34.075 "copy": false, 00:08:34.075 "nvme_iov_md": false 00:08:34.075 }, 00:08:34.075 "memory_domains": [ 00:08:34.075 { 00:08:34.075 "dma_device_id": "system", 00:08:34.075 "dma_device_type": 1 00:08:34.075 }, 00:08:34.075 { 00:08:34.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.075 "dma_device_type": 2 00:08:34.075 }, 00:08:34.075 { 00:08:34.075 "dma_device_id": "system", 00:08:34.075 "dma_device_type": 1 00:08:34.075 }, 00:08:34.075 { 00:08:34.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.075 "dma_device_type": 2 00:08:34.075 } 00:08:34.075 ], 00:08:34.075 "driver_specific": { 00:08:34.075 "raid": { 00:08:34.075 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:34.075 "strip_size_kb": 0, 00:08:34.075 "state": "online", 00:08:34.075 "raid_level": "raid1", 00:08:34.075 "superblock": true, 00:08:34.075 "num_base_bdevs": 2, 00:08:34.075 "num_base_bdevs_discovered": 2, 00:08:34.075 "num_base_bdevs_operational": 2, 00:08:34.075 "base_bdevs_list": [ 00:08:34.075 { 00:08:34.075 "name": "pt1", 00:08:34.075 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.075 "is_configured": true, 00:08:34.075 "data_offset": 2048, 00:08:34.075 "data_size": 63488 00:08:34.075 }, 00:08:34.075 { 00:08:34.075 "name": "pt2", 00:08:34.075 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.075 "is_configured": true, 00:08:34.075 "data_offset": 2048, 00:08:34.075 "data_size": 63488 00:08:34.075 } 00:08:34.075 ] 00:08:34.075 } 00:08:34.075 } 00:08:34.075 }' 00:08:34.075 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.075 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:34.075 pt2' 00:08:34.075 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:34.075 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:34.075 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:34.333 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:34.333 "name": "pt1", 00:08:34.333 "aliases": [ 00:08:34.333 "00000000-0000-0000-0000-000000000001" 00:08:34.333 ], 00:08:34.333 "product_name": "passthru", 00:08:34.333 "block_size": 512, 00:08:34.333 "num_blocks": 65536, 00:08:34.333 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:34.333 "assigned_rate_limits": { 00:08:34.333 "rw_ios_per_sec": 0, 00:08:34.333 "rw_mbytes_per_sec": 0, 00:08:34.333 "r_mbytes_per_sec": 0, 00:08:34.333 "w_mbytes_per_sec": 0 00:08:34.333 }, 00:08:34.333 "claimed": true, 00:08:34.334 "claim_type": "exclusive_write", 00:08:34.334 "zoned": false, 00:08:34.334 "supported_io_types": { 00:08:34.334 "read": true, 00:08:34.334 "write": true, 00:08:34.334 "unmap": true, 00:08:34.334 "flush": true, 00:08:34.334 "reset": true, 00:08:34.334 "nvme_admin": false, 00:08:34.334 "nvme_io": false, 00:08:34.334 "nvme_io_md": false, 00:08:34.334 "write_zeroes": true, 
00:08:34.334 "zcopy": true, 00:08:34.334 "get_zone_info": false, 00:08:34.334 "zone_management": false, 00:08:34.334 "zone_append": false, 00:08:34.334 "compare": false, 00:08:34.334 "compare_and_write": false, 00:08:34.334 "abort": true, 00:08:34.334 "seek_hole": false, 00:08:34.334 "seek_data": false, 00:08:34.334 "copy": true, 00:08:34.334 "nvme_iov_md": false 00:08:34.334 }, 00:08:34.334 "memory_domains": [ 00:08:34.334 { 00:08:34.334 "dma_device_id": "system", 00:08:34.334 "dma_device_type": 1 00:08:34.334 }, 00:08:34.334 { 00:08:34.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.334 "dma_device_type": 2 00:08:34.334 } 00:08:34.334 ], 00:08:34.334 "driver_specific": { 00:08:34.334 "passthru": { 00:08:34.334 "name": "pt1", 00:08:34.334 "base_bdev_name": "malloc1" 00:08:34.334 } 00:08:34.334 } 00:08:34.334 }' 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:34.334 17:27:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:34.592 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:34.592 "name": "pt2", 00:08:34.592 "aliases": [ 00:08:34.592 "00000000-0000-0000-0000-000000000002" 00:08:34.592 ], 00:08:34.592 "product_name": "passthru", 00:08:34.592 "block_size": 512, 00:08:34.592 "num_blocks": 65536, 00:08:34.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:34.592 "assigned_rate_limits": { 00:08:34.592 "rw_ios_per_sec": 0, 00:08:34.592 "rw_mbytes_per_sec": 0, 00:08:34.592 "r_mbytes_per_sec": 0, 00:08:34.592 "w_mbytes_per_sec": 0 00:08:34.592 }, 00:08:34.592 "claimed": true, 00:08:34.592 "claim_type": "exclusive_write", 00:08:34.592 "zoned": false, 00:08:34.592 "supported_io_types": { 00:08:34.592 "read": true, 00:08:34.592 "write": true, 00:08:34.592 "unmap": true, 00:08:34.592 "flush": true, 00:08:34.592 "reset": true, 00:08:34.592 "nvme_admin": false, 00:08:34.592 "nvme_io": false, 00:08:34.592 "nvme_io_md": false, 00:08:34.592 "write_zeroes": true, 00:08:34.592 "zcopy": true, 00:08:34.592 "get_zone_info": false, 00:08:34.592 "zone_management": false, 00:08:34.592 "zone_append": false, 00:08:34.592 
"compare": false, 00:08:34.592 "compare_and_write": false, 00:08:34.592 "abort": true, 00:08:34.592 "seek_hole": false, 00:08:34.592 "seek_data": false, 00:08:34.592 "copy": true, 00:08:34.592 "nvme_iov_md": false 00:08:34.592 }, 00:08:34.592 "memory_domains": [ 00:08:34.592 { 00:08:34.592 "dma_device_id": "system", 00:08:34.592 "dma_device_type": 1 00:08:34.592 }, 00:08:34.592 { 00:08:34.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.592 "dma_device_type": 2 00:08:34.592 } 00:08:34.592 ], 00:08:34.592 "driver_specific": { 00:08:34.592 "passthru": { 00:08:34.592 "name": "pt2", 00:08:34.592 "base_bdev_name": "malloc2" 00:08:34.592 } 00:08:34.592 } 00:08:34.592 }' 00:08:34.592 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:34.592 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:34.593 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:34.851 [2024-07-15 17:27:30.577612] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.851 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=8281a7bf-42cf-11ef-96ac-773515fba644 00:08:34.851 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 8281a7bf-42cf-11ef-96ac-773515fba644 ']' 00:08:34.851 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:35.109 [2024-07-15 17:27:30.857657] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:35.109 [2024-07-15 17:27:30.857681] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:35.109 [2024-07-15 17:27:30.857704] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:35.109 [2024-07-15 17:27:30.857719] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:35.109 [2024-07-15 17:27:30.857723] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18f90ba34f00 name raid_bdev1, state offline 00:08:35.109 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:35.109 17:27:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:35.367 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:35.367 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:35.367 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.367 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:35.627 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:35.627 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:35.886 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:35.886 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:36.145 17:27:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:36.403 [2024-07-15 17:27:32.225692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:36.403 [2024-07-15 17:27:32.226404] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:36.403 [2024-07-15 17:27:32.226429] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid 
bdev found on bdev malloc1 00:08:36.403 [2024-07-15 17:27:32.226468] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:36.403 [2024-07-15 17:27:32.226479] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:36.403 [2024-07-15 17:27:32.226484] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18f90ba34c80 name raid_bdev1, state configuring 00:08:36.403 request: 00:08:36.403 { 00:08:36.403 "name": "raid_bdev1", 00:08:36.403 "raid_level": "raid1", 00:08:36.403 "base_bdevs": [ 00:08:36.403 "malloc1", 00:08:36.403 "malloc2" 00:08:36.403 ], 00:08:36.403 "superblock": false, 00:08:36.403 "method": "bdev_raid_create", 00:08:36.403 "req_id": 1 00:08:36.403 } 00:08:36.403 Got JSON-RPC error response 00:08:36.403 response: 00:08:36.403 { 00:08:36.403 "code": -17, 00:08:36.403 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:36.403 } 00:08:36.661 17:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:08:36.661 17:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:36.661 17:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:36.661 17:27:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:36.661 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.661 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:36.919 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:36.919 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:36.919 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:37.181 [2024-07-15 17:27:32.861693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:37.181 [2024-07-15 17:27:32.861776] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.181 [2024-07-15 17:27:32.861805] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f90ba34780 00:08:37.181 [2024-07-15 17:27:32.861813] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.181 [2024-07-15 17:27:32.862493] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.181 [2024-07-15 17:27:32.862513] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:37.181 [2024-07-15 17:27:32.862538] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:37.181 [2024-07-15 17:27:32.862549] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:37.181 pt1 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
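Note: the -17 response above is the expected negative check: both malloc bdevs still carry the superblock written when raid_bdev1 was first created, so building a fresh array directly on them is refused. The exact RPC driven by bdev_raid.sh@456 in this run, and the response it is expected to produce:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
  # expected JSON-RPC error: code -17, "Failed to create RAID bdev raid_bdev1: File exists"
  # (raid_bdev_configure_base_bdev_check_sb_cb reports a superblock of a different raid bdev
  #  on both malloc1 and malloc2)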
00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.181 17:27:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.440 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:37.440 "name": "raid_bdev1", 00:08:37.440 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:37.440 "strip_size_kb": 0, 00:08:37.440 "state": "configuring", 00:08:37.440 "raid_level": "raid1", 00:08:37.440 "superblock": true, 00:08:37.440 "num_base_bdevs": 2, 00:08:37.440 "num_base_bdevs_discovered": 1, 00:08:37.440 "num_base_bdevs_operational": 2, 00:08:37.440 "base_bdevs_list": [ 00:08:37.440 { 00:08:37.440 "name": "pt1", 00:08:37.440 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:37.440 "is_configured": true, 00:08:37.440 "data_offset": 2048, 00:08:37.440 "data_size": 63488 00:08:37.440 }, 00:08:37.440 { 00:08:37.440 "name": null, 00:08:37.440 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:37.440 "is_configured": false, 00:08:37.440 "data_offset": 2048, 00:08:37.440 "data_size": 63488 00:08:37.440 } 00:08:37.440 ] 00:08:37.440 }' 00:08:37.440 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:37.440 17:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.698 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:37.698 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:37.698 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:37.698 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:37.956 [2024-07-15 17:27:33.713788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:37.956 [2024-07-15 17:27:33.713862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:37.956 [2024-07-15 17:27:33.713891] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f90ba34f00 00:08:37.956 [2024-07-15 17:27:33.713899] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:37.956 [2024-07-15 17:27:33.714011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:37.956 [2024-07-15 17:27:33.714023] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:37.956 [2024-07-15 17:27:33.714061] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:37.956 [2024-07-15 17:27:33.714070] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:37.956 [2024-07-15 17:27:33.714098] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18f90ba35180 00:08:37.956 [2024-07-15 17:27:33.714102] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:37.956 [2024-07-15 17:27:33.714122] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f90ba97e20 00:08:37.956 [2024-07-15 17:27:33.714217] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18f90ba35180 00:08:37.956 [2024-07-15 17:27:33.714222] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18f90ba35180 00:08:37.956 [2024-07-15 17:27:33.714245] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:37.956 pt2 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.956 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:38.215 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:38.215 "name": "raid_bdev1", 00:08:38.215 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:38.215 "strip_size_kb": 0, 00:08:38.215 "state": "online", 00:08:38.215 "raid_level": "raid1", 00:08:38.215 "superblock": true, 00:08:38.215 "num_base_bdevs": 2, 00:08:38.215 "num_base_bdevs_discovered": 2, 00:08:38.215 "num_base_bdevs_operational": 2, 00:08:38.215 "base_bdevs_list": [ 00:08:38.215 { 00:08:38.215 "name": "pt1", 00:08:38.215 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.215 "is_configured": true, 00:08:38.215 "data_offset": 2048, 00:08:38.215 "data_size": 63488 00:08:38.215 }, 00:08:38.215 { 00:08:38.215 "name": "pt2", 00:08:38.215 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.215 "is_configured": true, 00:08:38.215 "data_offset": 2048, 00:08:38.215 "data_size": 63488 00:08:38.215 } 00:08:38.215 ] 00:08:38.215 }' 00:08:38.215 17:27:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:38.215 
17:27:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.474 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:38.474 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:38.474 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:38.474 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:38.474 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:38.474 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:38.474 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:38.474 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:38.732 [2024-07-15 17:27:34.469853] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.732 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:38.732 "name": "raid_bdev1", 00:08:38.732 "aliases": [ 00:08:38.732 "8281a7bf-42cf-11ef-96ac-773515fba644" 00:08:38.732 ], 00:08:38.732 "product_name": "Raid Volume", 00:08:38.732 "block_size": 512, 00:08:38.732 "num_blocks": 63488, 00:08:38.732 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:38.732 "assigned_rate_limits": { 00:08:38.732 "rw_ios_per_sec": 0, 00:08:38.732 "rw_mbytes_per_sec": 0, 00:08:38.732 "r_mbytes_per_sec": 0, 00:08:38.732 "w_mbytes_per_sec": 0 00:08:38.732 }, 00:08:38.732 "claimed": false, 00:08:38.732 "zoned": false, 00:08:38.732 "supported_io_types": { 00:08:38.732 "read": true, 00:08:38.732 "write": true, 00:08:38.732 "unmap": false, 00:08:38.732 "flush": false, 00:08:38.732 "reset": true, 00:08:38.732 "nvme_admin": false, 00:08:38.732 "nvme_io": false, 00:08:38.732 "nvme_io_md": false, 00:08:38.732 "write_zeroes": true, 00:08:38.732 "zcopy": false, 00:08:38.732 "get_zone_info": false, 00:08:38.732 "zone_management": false, 00:08:38.732 "zone_append": false, 00:08:38.732 "compare": false, 00:08:38.732 "compare_and_write": false, 00:08:38.732 "abort": false, 00:08:38.732 "seek_hole": false, 00:08:38.732 "seek_data": false, 00:08:38.732 "copy": false, 00:08:38.732 "nvme_iov_md": false 00:08:38.732 }, 00:08:38.732 "memory_domains": [ 00:08:38.732 { 00:08:38.732 "dma_device_id": "system", 00:08:38.732 "dma_device_type": 1 00:08:38.732 }, 00:08:38.732 { 00:08:38.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.732 "dma_device_type": 2 00:08:38.732 }, 00:08:38.732 { 00:08:38.732 "dma_device_id": "system", 00:08:38.732 "dma_device_type": 1 00:08:38.732 }, 00:08:38.732 { 00:08:38.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.732 "dma_device_type": 2 00:08:38.732 } 00:08:38.732 ], 00:08:38.732 "driver_specific": { 00:08:38.732 "raid": { 00:08:38.732 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:38.732 "strip_size_kb": 0, 00:08:38.732 "state": "online", 00:08:38.732 "raid_level": "raid1", 00:08:38.732 "superblock": true, 00:08:38.732 "num_base_bdevs": 2, 00:08:38.732 "num_base_bdevs_discovered": 2, 00:08:38.732 "num_base_bdevs_operational": 2, 00:08:38.732 "base_bdevs_list": [ 00:08:38.732 { 00:08:38.732 "name": "pt1", 00:08:38.732 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.732 "is_configured": true, 00:08:38.732 
"data_offset": 2048, 00:08:38.732 "data_size": 63488 00:08:38.732 }, 00:08:38.732 { 00:08:38.732 "name": "pt2", 00:08:38.732 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:38.732 "is_configured": true, 00:08:38.732 "data_offset": 2048, 00:08:38.732 "data_size": 63488 00:08:38.732 } 00:08:38.732 ] 00:08:38.732 } 00:08:38.732 } 00:08:38.732 }' 00:08:38.732 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.732 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:38.732 pt2' 00:08:38.732 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:38.732 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:38.732 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:38.991 "name": "pt1", 00:08:38.991 "aliases": [ 00:08:38.991 "00000000-0000-0000-0000-000000000001" 00:08:38.991 ], 00:08:38.991 "product_name": "passthru", 00:08:38.991 "block_size": 512, 00:08:38.991 "num_blocks": 65536, 00:08:38.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:38.991 "assigned_rate_limits": { 00:08:38.991 "rw_ios_per_sec": 0, 00:08:38.991 "rw_mbytes_per_sec": 0, 00:08:38.991 "r_mbytes_per_sec": 0, 00:08:38.991 "w_mbytes_per_sec": 0 00:08:38.991 }, 00:08:38.991 "claimed": true, 00:08:38.991 "claim_type": "exclusive_write", 00:08:38.991 "zoned": false, 00:08:38.991 "supported_io_types": { 00:08:38.991 "read": true, 00:08:38.991 "write": true, 00:08:38.991 "unmap": true, 00:08:38.991 "flush": true, 00:08:38.991 "reset": true, 00:08:38.991 "nvme_admin": false, 00:08:38.991 "nvme_io": false, 00:08:38.991 "nvme_io_md": false, 00:08:38.991 "write_zeroes": true, 00:08:38.991 "zcopy": true, 00:08:38.991 "get_zone_info": false, 00:08:38.991 "zone_management": false, 00:08:38.991 "zone_append": false, 00:08:38.991 "compare": false, 00:08:38.991 "compare_and_write": false, 00:08:38.991 "abort": true, 00:08:38.991 "seek_hole": false, 00:08:38.991 "seek_data": false, 00:08:38.991 "copy": true, 00:08:38.991 "nvme_iov_md": false 00:08:38.991 }, 00:08:38.991 "memory_domains": [ 00:08:38.991 { 00:08:38.991 "dma_device_id": "system", 00:08:38.991 "dma_device_type": 1 00:08:38.991 }, 00:08:38.991 { 00:08:38.991 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.991 "dma_device_type": 2 00:08:38.991 } 00:08:38.991 ], 00:08:38.991 "driver_specific": { 00:08:38.991 "passthru": { 00:08:38.991 "name": "pt1", 00:08:38.991 "base_bdev_name": "malloc1" 00:08:38.991 } 00:08:38.991 } 00:08:38.991 }' 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:38.991 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:39.249 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:39.249 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:39.249 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:39.249 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:39.249 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:39.249 17:27:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:39.249 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:39.249 "name": "pt2", 00:08:39.249 "aliases": [ 00:08:39.249 "00000000-0000-0000-0000-000000000002" 00:08:39.249 ], 00:08:39.249 "product_name": "passthru", 00:08:39.249 "block_size": 512, 00:08:39.249 "num_blocks": 65536, 00:08:39.249 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:39.249 "assigned_rate_limits": { 00:08:39.249 "rw_ios_per_sec": 0, 00:08:39.249 "rw_mbytes_per_sec": 0, 00:08:39.249 "r_mbytes_per_sec": 0, 00:08:39.249 "w_mbytes_per_sec": 0 00:08:39.249 }, 00:08:39.249 "claimed": true, 00:08:39.249 "claim_type": "exclusive_write", 00:08:39.249 "zoned": false, 00:08:39.250 "supported_io_types": { 00:08:39.250 "read": true, 00:08:39.250 "write": true, 00:08:39.250 "unmap": true, 00:08:39.250 "flush": true, 00:08:39.250 "reset": true, 00:08:39.250 "nvme_admin": false, 00:08:39.250 "nvme_io": false, 00:08:39.250 "nvme_io_md": false, 00:08:39.250 "write_zeroes": true, 00:08:39.250 "zcopy": true, 00:08:39.250 "get_zone_info": false, 00:08:39.250 "zone_management": false, 00:08:39.250 "zone_append": false, 00:08:39.250 "compare": false, 00:08:39.250 "compare_and_write": false, 00:08:39.250 "abort": true, 00:08:39.250 "seek_hole": false, 00:08:39.250 "seek_data": false, 00:08:39.250 "copy": true, 00:08:39.250 "nvme_iov_md": false 00:08:39.250 }, 00:08:39.250 "memory_domains": [ 00:08:39.250 { 00:08:39.250 "dma_device_id": "system", 00:08:39.250 "dma_device_type": 1 00:08:39.250 }, 00:08:39.250 { 00:08:39.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:39.250 "dma_device_type": 2 00:08:39.250 } 00:08:39.250 ], 00:08:39.250 "driver_specific": { 00:08:39.250 "passthru": { 00:08:39.250 "name": "pt2", 00:08:39.250 "base_bdev_name": "malloc2" 00:08:39.250 } 00:08:39.250 } 00:08:39.250 }' 00:08:39.250 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:39.250 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:39.250 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:39.250 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:39.508 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:39.508 [2024-07-15 17:27:35.333890] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:39.766 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 8281a7bf-42cf-11ef-96ac-773515fba644 '!=' 8281a7bf-42cf-11ef-96ac-773515fba644 ']' 00:08:39.766 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:08:39.766 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:39.766 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:39.766 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:40.024 [2024-07-15 17:27:35.625871] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:40.024 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.282 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:40.283 "name": "raid_bdev1", 00:08:40.283 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:40.283 "strip_size_kb": 0, 00:08:40.283 "state": "online", 00:08:40.283 "raid_level": "raid1", 00:08:40.283 "superblock": true, 00:08:40.283 "num_base_bdevs": 2, 00:08:40.283 "num_base_bdevs_discovered": 1, 00:08:40.283 "num_base_bdevs_operational": 1, 00:08:40.283 "base_bdevs_list": [ 00:08:40.283 { 00:08:40.283 "name": null, 00:08:40.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.283 "is_configured": false, 00:08:40.283 "data_offset": 
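Note: after the second round of property checks, the test re-reads the re-assembled array's uuid (bdev_raid.sh@486) and compares it with the one captured before teardown (bdev_raid.sh@434), confirming that the identity stored in the superblock survived delete and re-examine. The two queries involved, as issued in this run; raid_bdev_uuid is the script's own variable:

  raid_bdev_uuid=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
  # ... after deleting the array and re-creating pt1/pt2 from the superblock-carrying malloc bdevs ...
  [ "$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')" = "$raid_bdev_uuid" ]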
2048, 00:08:40.283 "data_size": 63488 00:08:40.283 }, 00:08:40.283 { 00:08:40.283 "name": "pt2", 00:08:40.283 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:40.283 "is_configured": true, 00:08:40.283 "data_offset": 2048, 00:08:40.283 "data_size": 63488 00:08:40.283 } 00:08:40.283 ] 00:08:40.283 }' 00:08:40.283 17:27:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:40.283 17:27:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.541 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:40.799 [2024-07-15 17:27:36.393908] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:40.799 [2024-07-15 17:27:36.393936] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:40.799 [2024-07-15 17:27:36.393974] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:40.799 [2024-07-15 17:27:36.393986] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:40.799 [2024-07-15 17:27:36.393991] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18f90ba35180 name raid_bdev1, state offline 00:08:40.799 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.799 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:08:41.059 17:27:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:41.319 [2024-07-15 17:27:37.093991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:41.319 [2024-07-15 17:27:37.094058] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:41.319 [2024-07-15 17:27:37.094085] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f90ba34f00 00:08:41.319 [2024-07-15 17:27:37.094092] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:41.319 [2024-07-15 17:27:37.094795] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:41.319 
[2024-07-15 17:27:37.094819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:41.319 [2024-07-15 17:27:37.094843] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:41.319 [2024-07-15 17:27:37.094855] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:41.319 [2024-07-15 17:27:37.094879] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18f90ba35180 00:08:41.319 [2024-07-15 17:27:37.094883] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:41.319 [2024-07-15 17:27:37.094903] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f90ba97e20 00:08:41.320 [2024-07-15 17:27:37.094952] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18f90ba35180 00:08:41.320 [2024-07-15 17:27:37.094957] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18f90ba35180 00:08:41.320 [2024-07-15 17:27:37.094977] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:41.320 pt2 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:41.320 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:41.591 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:41.591 "name": "raid_bdev1", 00:08:41.591 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:41.591 "strip_size_kb": 0, 00:08:41.591 "state": "online", 00:08:41.591 "raid_level": "raid1", 00:08:41.591 "superblock": true, 00:08:41.591 "num_base_bdevs": 2, 00:08:41.591 "num_base_bdevs_discovered": 1, 00:08:41.591 "num_base_bdevs_operational": 1, 00:08:41.591 "base_bdevs_list": [ 00:08:41.591 { 00:08:41.591 "name": null, 00:08:41.591 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.591 "is_configured": false, 00:08:41.591 "data_offset": 2048, 00:08:41.591 "data_size": 63488 00:08:41.591 }, 00:08:41.591 { 00:08:41.591 "name": "pt2", 00:08:41.591 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:41.591 "is_configured": true, 00:08:41.591 "data_offset": 2048, 00:08:41.591 "data_size": 63488 00:08:41.591 } 00:08:41.591 ] 00:08:41.591 }' 00:08:41.591 17:27:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:41.591 17:27:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.871 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:42.129 [2024-07-15 17:27:37.886049] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.129 [2024-07-15 17:27:37.886074] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:42.129 [2024-07-15 17:27:37.886111] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:42.129 [2024-07-15 17:27:37.886123] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:42.129 [2024-07-15 17:27:37.886127] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18f90ba35180 name raid_bdev1, state offline 00:08:42.129 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.129 17:27:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:08:42.388 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:08:42.388 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:08:42.388 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:08:42.388 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:42.646 [2024-07-15 17:27:38.350123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:42.646 [2024-07-15 17:27:38.350194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:42.646 [2024-07-15 17:27:38.350207] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f90ba34c80 00:08:42.646 [2024-07-15 17:27:38.350216] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:42.646 [2024-07-15 17:27:38.350904] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:42.646 [2024-07-15 17:27:38.350931] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:42.646 [2024-07-15 17:27:38.350956] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:42.646 [2024-07-15 17:27:38.350968] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:42.646 [2024-07-15 17:27:38.350999] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:42.646 [2024-07-15 17:27:38.351004] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:42.646 [2024-07-15 17:27:38.351009] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18f90ba34780 name raid_bdev1, state configuring 00:08:42.646 [2024-07-15 17:27:38.351020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:42.646 [2024-07-15 17:27:38.351035] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18f90ba34780 00:08:42.646 [2024-07-15 17:27:38.351039] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:42.646 [2024-07-15 17:27:38.351058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f90ba97e20 00:08:42.646 [2024-07-15 17:27:38.351111] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18f90ba34780 00:08:42.646 [2024-07-15 17:27:38.351123] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18f90ba34780 00:08:42.646 [2024-07-15 17:27:38.351145] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.646 pt1 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:42.646 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:42.904 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:42.904 "name": "raid_bdev1", 00:08:42.904 "uuid": "8281a7bf-42cf-11ef-96ac-773515fba644", 00:08:42.904 "strip_size_kb": 0, 00:08:42.904 "state": "online", 00:08:42.904 "raid_level": "raid1", 00:08:42.904 "superblock": true, 00:08:42.904 "num_base_bdevs": 2, 00:08:42.904 "num_base_bdevs_discovered": 1, 00:08:42.904 "num_base_bdevs_operational": 1, 00:08:42.904 "base_bdevs_list": [ 00:08:42.904 { 00:08:42.904 "name": null, 00:08:42.904 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.904 "is_configured": false, 00:08:42.904 "data_offset": 2048, 00:08:42.904 "data_size": 63488 00:08:42.904 }, 00:08:42.904 { 00:08:42.904 "name": "pt2", 00:08:42.904 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:42.904 "is_configured": true, 00:08:42.904 "data_offset": 2048, 00:08:42.904 "data_size": 63488 00:08:42.904 } 00:08:42.904 ] 00:08:42.904 }' 00:08:42.904 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:42.904 17:27:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.162 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:08:43.162 17:27:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq 
-r '.[].base_bdevs_list[0].is_configured' 00:08:43.421 17:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:08:43.421 17:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:43.421 17:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:08:43.680 [2024-07-15 17:27:39.478248] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 8281a7bf-42cf-11ef-96ac-773515fba644 '!=' 8281a7bf-42cf-11ef-96ac-773515fba644 ']' 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51309 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 51309 ']' 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 51309 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 51309 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:43.680 killing process with pid 51309 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51309' 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 51309 00:08:43.680 [2024-07-15 17:27:39.507394] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.680 [2024-07-15 17:27:39.507425] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.680 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 51309 00:08:43.680 [2024-07-15 17:27:39.507438] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.680 [2024-07-15 17:27:39.507442] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18f90ba34780 name raid_bdev1, state offline 00:08:43.939 [2024-07-15 17:27:39.519833] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:43.939 17:27:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:43.939 00:08:43.939 real 0m13.346s 00:08:43.939 user 0m23.733s 00:08:43.939 sys 0m2.195s 00:08:43.939 ************************************ 00:08:43.939 END TEST raid_superblock_test 00:08:43.939 ************************************ 00:08:43.939 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.939 17:27:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.939 17:27:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:43.939 17:27:39 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:43.939 17:27:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:43.939 17:27:39 
bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.939 17:27:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:43.939 ************************************ 00:08:43.939 START TEST raid_read_error_test 00:08:43.939 ************************************ 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.NoMS5FwscO 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51698 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51698 /var/tmp/spdk-raid.sock 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 51698 ']' 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:43.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
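For readers reconstructing the flow from the trace, the read-error test first brings up a dedicated bdevperf instance in wait mode (-z) on its own RPC socket and only then builds the raid stack over RPC. A minimal sketch of that startup step, using the command line traced above; the polling loop is a simplified stand-in for the waitforlisten helper, and the output capture is illustrative rather than the harness's exact redirection:

    # start bdevperf idle (-z) on a private RPC socket; the raid stack is built over RPC afterwards
    bdevperf_log=$(mktemp -p /raidtest)
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f \
        -L bdev_raid > "$bdevperf_log" &
    raid_pid=$!
    # simplified stand-in for waitforlisten: poll until the RPC socket answers
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done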
00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.939 17:27:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:43.939 [2024-07-15 17:27:39.760642] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:08:43.939 [2024-07-15 17:27:39.760808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:44.504 EAL: TSC is not safe to use in SMP mode 00:08:44.504 EAL: TSC is not invariant 00:08:44.504 [2024-07-15 17:27:40.293735] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.762 [2024-07-15 17:27:40.384274] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:44.762 [2024-07-15 17:27:40.386412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:44.762 [2024-07-15 17:27:40.387233] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:44.762 [2024-07-15 17:27:40.387246] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.020 17:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:45.020 17:27:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:45.020 17:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:45.020 17:27:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:45.277 BaseBdev1_malloc 00:08:45.277 17:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:45.535 true 00:08:45.535 17:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:45.793 [2024-07-15 17:27:41.595053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:45.793 [2024-07-15 17:27:41.595135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:45.793 [2024-07-15 17:27:41.595207] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa9344634780 00:08:45.793 [2024-07-15 17:27:41.595218] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:45.793 [2024-07-15 17:27:41.595900] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:45.793 [2024-07-15 17:27:41.595927] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:45.793 BaseBdev1 00:08:45.793 17:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:45.793 17:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:46.050 BaseBdev2_malloc 00:08:46.050 17:27:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:46.308 true 00:08:46.565 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:46.565 [2024-07-15 17:27:42.383084] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:46.565 [2024-07-15 17:27:42.383172] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:46.565 [2024-07-15 17:27:42.383217] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa9344634c80 00:08:46.565 [2024-07-15 17:27:42.383226] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:46.565 [2024-07-15 17:27:42.383906] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:46.565 [2024-07-15 17:27:42.383930] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:46.565 BaseBdev2 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:46.824 [2024-07-15 17:27:42.619101] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.824 [2024-07-15 17:27:42.619777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:46.824 [2024-07-15 17:27:42.619846] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa9344634f00 00:08:46.824 [2024-07-15 17:27:42.619852] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:46.824 [2024-07-15 17:27:42.619887] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa93446a0e20 00:08:46.824 [2024-07-15 17:27:42.619963] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa9344634f00 00:08:46.824 [2024-07-15 17:27:42.619968] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xa9344634f00 00:08:46.824 [2024-07-15 17:27:42.619997] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs_discovered 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.824 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.082 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:47.082 "name": "raid_bdev1", 00:08:47.082 "uuid": "8acdf234-42cf-11ef-96ac-773515fba644", 00:08:47.082 "strip_size_kb": 0, 00:08:47.082 "state": "online", 00:08:47.082 "raid_level": "raid1", 00:08:47.082 "superblock": true, 00:08:47.082 "num_base_bdevs": 2, 00:08:47.082 "num_base_bdevs_discovered": 2, 00:08:47.082 "num_base_bdevs_operational": 2, 00:08:47.082 "base_bdevs_list": [ 00:08:47.082 { 00:08:47.082 "name": "BaseBdev1", 00:08:47.082 "uuid": "55836ae0-4bde-c551-8556-0bb0fd4e2f23", 00:08:47.082 "is_configured": true, 00:08:47.082 "data_offset": 2048, 00:08:47.082 "data_size": 63488 00:08:47.082 }, 00:08:47.082 { 00:08:47.082 "name": "BaseBdev2", 00:08:47.082 "uuid": "2035b4b6-147e-ea51-8b94-f6f7e9f88b99", 00:08:47.082 "is_configured": true, 00:08:47.082 "data_offset": 2048, 00:08:47.082 "data_size": 63488 00:08:47.082 } 00:08:47.082 ] 00:08:47.082 }' 00:08:47.082 17:27:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:47.082 17:27:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.647 17:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:47.647 17:27:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:47.647 [2024-07-15 17:27:43.291383] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa93446a0ec0 00:08:48.581 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 
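The raid_bdev1 volume whose JSON dump appears above is assembled per test from three layers per member: a malloc bdev, an error bdev wrapped around it for fault injection, and a passthru bdev that becomes the raid leg. A condensed sketch using the same RPCs the script traces; the rpc shell function is only shorthand for the rpc.py invocation used throughout the log:

    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # first member: malloc -> error bdev (injectable, named EE_<base>) -> passthru used as the raid leg
    rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
    rpc bdev_error_create BaseBdev1_malloc
    rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

    # second member, built the same way
    rpc bdev_malloc_create 32 512 -b BaseBdev2_malloc
    rpc bdev_error_create BaseBdev2_malloc
    rpc bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2

    # assemble the raid1 volume with an on-disk superblock (-s)
    rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s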
00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:48.839 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.096 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:49.096 "name": "raid_bdev1", 00:08:49.096 "uuid": "8acdf234-42cf-11ef-96ac-773515fba644", 00:08:49.096 "strip_size_kb": 0, 00:08:49.096 "state": "online", 00:08:49.096 "raid_level": "raid1", 00:08:49.096 "superblock": true, 00:08:49.096 "num_base_bdevs": 2, 00:08:49.096 "num_base_bdevs_discovered": 2, 00:08:49.096 "num_base_bdevs_operational": 2, 00:08:49.096 "base_bdevs_list": [ 00:08:49.096 { 00:08:49.096 "name": "BaseBdev1", 00:08:49.096 "uuid": "55836ae0-4bde-c551-8556-0bb0fd4e2f23", 00:08:49.096 "is_configured": true, 00:08:49.096 "data_offset": 2048, 00:08:49.096 "data_size": 63488 00:08:49.096 }, 00:08:49.096 { 00:08:49.096 "name": "BaseBdev2", 00:08:49.096 "uuid": "2035b4b6-147e-ea51-8b94-f6f7e9f88b99", 00:08:49.096 "is_configured": true, 00:08:49.096 "data_offset": 2048, 00:08:49.096 "data_size": 63488 00:08:49.096 } 00:08:49.096 ] 00:08:49.096 }' 00:08:49.096 17:27:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:49.096 17:27:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.354 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:49.613 [2024-07-15 17:27:45.359666] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:49.613 [2024-07-15 17:27:45.359696] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.613 [2024-07-15 17:27:45.360063] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.613 [2024-07-15 17:27:45.360073] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.613 [2024-07-15 17:27:45.360087] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.613 [2024-07-15 17:27:45.360091] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa9344634f00 name raid_bdev1, state offline 00:08:49.613 0 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51698 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 51698 ']' 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 51698 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51698 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:49.613 17:27:45 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:49.613 killing process with pid 51698 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51698' 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 51698 00:08:49.613 [2024-07-15 17:27:45.393920] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.613 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 51698 00:08:49.613 [2024-07-15 17:27:45.406154] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.NoMS5FwscO 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:49.872 00:08:49.872 real 0m5.851s 00:08:49.872 user 0m8.855s 00:08:49.872 sys 0m1.103s 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:49.872 17:27:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 ************************************ 00:08:49.872 END TEST raid_read_error_test 00:08:49.872 ************************************ 00:08:49.872 17:27:45 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:49.872 17:27:45 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:08:49.872 17:27:45 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:49.872 17:27:45 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:49.872 17:27:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 ************************************ 00:08:49.872 START TEST raid_write_error_test 00:08:49.872 ************************************ 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Uxo6n1PMw4 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51826 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51826 /var/tmp/spdk-raid.sock 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 51826 ']' 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.872 17:27:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.872 [2024-07-15 17:27:45.660617] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:08:49.872 [2024-07-15 17:27:45.660859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:50.439 EAL: TSC is not safe to use in SMP mode 00:08:50.439 EAL: TSC is not invariant 00:08:50.439 [2024-07-15 17:27:46.193275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.699 [2024-07-15 17:27:46.283029] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
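The pass criterion for the read-error test that just finished is taken from the bdevperf per-bdev summary: after read failures are injected on one leg, the raid1 volume itself must report 0.00 failed I/O per second. A condensed sketch of that check, reusing the rpc shorthand above and the bdevperf_log temp file from the trace:

    # inject read failures on the error bdev under BaseBdev1, then drive I/O via bdevperf
    rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

    # raid1 is expected to absorb read errors: column 6 of the raid_bdev1 summary row is the
    # failures-per-second figure the test compares against 0.00
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" = "0.00" ]]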
00:08:50.699 [2024-07-15 17:27:46.285248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.699 [2024-07-15 17:27:46.286039] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.699 [2024-07-15 17:27:46.286053] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:50.956 17:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.957 17:27:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:50.957 17:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:50.957 17:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:51.214 BaseBdev1_malloc 00:08:51.214 17:27:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:51.472 true 00:08:51.472 17:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:51.730 [2024-07-15 17:27:47.454779] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:51.730 [2024-07-15 17:27:47.454862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.730 [2024-07-15 17:27:47.454902] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x764c5234780 00:08:51.730 [2024-07-15 17:27:47.454911] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.730 [2024-07-15 17:27:47.455599] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.730 [2024-07-15 17:27:47.455625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:51.730 BaseBdev1 00:08:51.730 17:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:51.730 17:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:51.988 BaseBdev2_malloc 00:08:51.988 17:27:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:52.246 true 00:08:52.246 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:52.503 [2024-07-15 17:27:48.234849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:52.503 [2024-07-15 17:27:48.234925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.503 [2024-07-15 17:27:48.234971] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x764c5234c80 00:08:52.503 [2024-07-15 17:27:48.234979] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.503 [2024-07-15 17:27:48.235684] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.503 [2024-07-15 17:27:48.235709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:08:52.503 BaseBdev2 00:08:52.503 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:52.762 [2024-07-15 17:27:48.466856] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.762 [2024-07-15 17:27:48.467488] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.762 [2024-07-15 17:27:48.467558] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x764c5234f00 00:08:52.762 [2024-07-15 17:27:48.467565] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.762 [2024-07-15 17:27:48.467600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x764c52a0e20 00:08:52.762 [2024-07-15 17:27:48.467679] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x764c5234f00 00:08:52.762 [2024-07-15 17:27:48.467684] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x764c5234f00 00:08:52.762 [2024-07-15 17:27:48.467713] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.762 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.020 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:53.020 "name": "raid_bdev1", 00:08:53.020 "uuid": "8e4a3e32-42cf-11ef-96ac-773515fba644", 00:08:53.020 "strip_size_kb": 0, 00:08:53.020 "state": "online", 00:08:53.020 "raid_level": "raid1", 00:08:53.020 "superblock": true, 00:08:53.020 "num_base_bdevs": 2, 00:08:53.020 "num_base_bdevs_discovered": 2, 00:08:53.020 "num_base_bdevs_operational": 2, 00:08:53.020 "base_bdevs_list": [ 00:08:53.020 { 00:08:53.020 "name": "BaseBdev1", 00:08:53.020 "uuid": "e6a85956-217d-1c57-8379-800b4ff12a8d", 00:08:53.020 "is_configured": true, 00:08:53.020 "data_offset": 2048, 00:08:53.020 "data_size": 63488 00:08:53.020 }, 00:08:53.020 { 00:08:53.020 "name": "BaseBdev2", 00:08:53.020 "uuid": "b06592a1-70df-0d59-93cd-4b366cc04efa", 
00:08:53.020 "is_configured": true, 00:08:53.020 "data_offset": 2048, 00:08:53.020 "data_size": 63488 00:08:53.020 } 00:08:53.020 ] 00:08:53.020 }' 00:08:53.020 17:27:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:53.020 17:27:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.278 17:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:53.278 17:27:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:53.536 [2024-07-15 17:27:49.183134] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x764c52a0ec0 00:08:54.470 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:54.728 [2024-07-15 17:27:50.412575] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:08:54.728 [2024-07-15 17:27:50.412634] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:54.728 [2024-07-15 17:27:50.412772] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x764c52a0ec0 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.728 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:54.986 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:54.986 "name": "raid_bdev1", 00:08:54.986 "uuid": "8e4a3e32-42cf-11ef-96ac-773515fba644", 00:08:54.986 "strip_size_kb": 0, 00:08:54.986 "state": "online", 00:08:54.986 "raid_level": "raid1", 00:08:54.986 
"superblock": true, 00:08:54.986 "num_base_bdevs": 2, 00:08:54.986 "num_base_bdevs_discovered": 1, 00:08:54.986 "num_base_bdevs_operational": 1, 00:08:54.986 "base_bdevs_list": [ 00:08:54.986 { 00:08:54.986 "name": null, 00:08:54.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.986 "is_configured": false, 00:08:54.986 "data_offset": 2048, 00:08:54.986 "data_size": 63488 00:08:54.986 }, 00:08:54.986 { 00:08:54.986 "name": "BaseBdev2", 00:08:54.986 "uuid": "b06592a1-70df-0d59-93cd-4b366cc04efa", 00:08:54.986 "is_configured": true, 00:08:54.986 "data_offset": 2048, 00:08:54.986 "data_size": 63488 00:08:54.986 } 00:08:54.986 ] 00:08:54.986 }' 00:08:54.986 17:27:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:54.986 17:27:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.244 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:55.530 [2024-07-15 17:27:51.237943] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:55.530 [2024-07-15 17:27:51.237969] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:55.530 [2024-07-15 17:27:51.238331] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.530 [2024-07-15 17:27:51.238341] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:55.530 [2024-07-15 17:27:51.238352] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.530 [2024-07-15 17:27:51.238356] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x764c5234f00 name raid_bdev1, state offline 00:08:55.530 0 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51826 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 51826 ']' 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 51826 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51826 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51826' 00:08:55.530 killing process with pid 51826 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 51826 00:08:55.530 [2024-07-15 17:27:51.262682] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.530 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 51826 00:08:55.530 [2024-07-15 17:27:51.273765] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Uxo6n1PMw4 00:08:55.788 17:27:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:08:55.788 00:08:55.788 real 0m5.819s 00:08:55.788 user 0m8.981s 00:08:55.788 sys 0m0.942s 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:55.788 17:27:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.788 ************************************ 00:08:55.788 END TEST raid_write_error_test 00:08:55.788 ************************************ 00:08:55.788 17:27:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:55.788 17:27:51 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:08:55.788 17:27:51 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:55.788 17:27:51 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:08:55.788 17:27:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:55.788 17:27:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.788 17:27:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:55.788 ************************************ 00:08:55.788 START TEST raid_state_function_test 00:08:55.788 ************************************ 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:55.788 17:27:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51948 00:08:55.788 Process raid pid: 51948 00:08:55.788 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:55.789 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51948' 00:08:55.789 17:27:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51948 /var/tmp/spdk-raid.sock 00:08:55.789 17:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 51948 ']' 00:08:55.789 17:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:55.789 17:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:55.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:55.789 17:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:55.789 17:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:55.789 17:27:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.789 [2024-07-15 17:27:51.519886] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:08:55.789 [2024-07-15 17:27:51.520145] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:56.355 EAL: TSC is not safe to use in SMP mode 00:08:56.355 EAL: TSC is not invariant 00:08:56.355 [2024-07-15 17:27:52.095345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.612 [2024-07-15 17:27:52.189842] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
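Before the state-function test gets going, note what the write-error variant above checked: unlike read errors, write errors are not absorbed, and the trace shows raid1 failing BaseBdev1 out of slot 0 and continuing degraded. A condensed sketch of that path, with the jq extraction standing in for the fuller verify_raid_bdev_state comparison traced above:

    # write failures cause the raid1 bdev to drop the failing leg instead of masking the error
    rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

    # after the run, raid_bdev1 is still online but with a single operational base bdev,
    # matching the traced call: verify_raid_bdev_state raid_bdev1 online raid1 0 1
    rpc bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_operational'   # prints 1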
00:08:56.612 [2024-07-15 17:27:52.192065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.612 [2024-07-15 17:27:52.192865] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.612 [2024-07-15 17:27:52.192880] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.869 17:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:56.870 17:27:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:08:56.870 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:57.127 [2024-07-15 17:27:52.837514] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:57.127 [2024-07-15 17:27:52.837567] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:57.127 [2024-07-15 17:27:52.837572] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:57.127 [2024-07-15 17:27:52.837581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:57.127 [2024-07-15 17:27:52.837585] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:57.127 [2024-07-15 17:27:52.837592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.127 17:27:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:57.385 17:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:57.385 "name": "Existed_Raid", 00:08:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.385 "strip_size_kb": 64, 00:08:57.385 "state": "configuring", 00:08:57.385 "raid_level": "raid0", 00:08:57.385 "superblock": false, 00:08:57.385 "num_base_bdevs": 3, 00:08:57.385 "num_base_bdevs_discovered": 0, 00:08:57.385 "num_base_bdevs_operational": 3, 00:08:57.385 "base_bdevs_list": [ 
00:08:57.385 { 00:08:57.385 "name": "BaseBdev1", 00:08:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.385 "is_configured": false, 00:08:57.385 "data_offset": 0, 00:08:57.385 "data_size": 0 00:08:57.385 }, 00:08:57.385 { 00:08:57.385 "name": "BaseBdev2", 00:08:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.385 "is_configured": false, 00:08:57.385 "data_offset": 0, 00:08:57.385 "data_size": 0 00:08:57.385 }, 00:08:57.385 { 00:08:57.385 "name": "BaseBdev3", 00:08:57.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.385 "is_configured": false, 00:08:57.385 "data_offset": 0, 00:08:57.385 "data_size": 0 00:08:57.385 } 00:08:57.385 ] 00:08:57.385 }' 00:08:57.385 17:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:57.385 17:27:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.949 17:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:57.949 [2024-07-15 17:27:53.697593] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:57.949 [2024-07-15 17:27:53.697622] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a8b3d634500 name Existed_Raid, state configuring 00:08:57.949 17:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:08:58.206 [2024-07-15 17:27:53.933613] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:58.206 [2024-07-15 17:27:53.933678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:58.206 [2024-07-15 17:27:53.933683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:58.206 [2024-07-15 17:27:53.933707] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:58.206 [2024-07-15 17:27:53.933710] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:08:58.206 [2024-07-15 17:27:53.933717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:08:58.206 17:27:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:58.463 [2024-07-15 17:27:54.178675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:58.463 BaseBdev1 00:08:58.463 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:58.463 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:58.463 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:58.463 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:58.463 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:58.463 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:58.463 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:58.721 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:58.978 [ 00:08:58.978 { 00:08:58.978 "name": "BaseBdev1", 00:08:58.978 "aliases": [ 00:08:58.978 "91b1a435-42cf-11ef-96ac-773515fba644" 00:08:58.978 ], 00:08:58.978 "product_name": "Malloc disk", 00:08:58.978 "block_size": 512, 00:08:58.978 "num_blocks": 65536, 00:08:58.978 "uuid": "91b1a435-42cf-11ef-96ac-773515fba644", 00:08:58.978 "assigned_rate_limits": { 00:08:58.978 "rw_ios_per_sec": 0, 00:08:58.978 "rw_mbytes_per_sec": 0, 00:08:58.978 "r_mbytes_per_sec": 0, 00:08:58.978 "w_mbytes_per_sec": 0 00:08:58.978 }, 00:08:58.978 "claimed": true, 00:08:58.978 "claim_type": "exclusive_write", 00:08:58.978 "zoned": false, 00:08:58.978 "supported_io_types": { 00:08:58.978 "read": true, 00:08:58.978 "write": true, 00:08:58.978 "unmap": true, 00:08:58.978 "flush": true, 00:08:58.978 "reset": true, 00:08:58.978 "nvme_admin": false, 00:08:58.978 "nvme_io": false, 00:08:58.978 "nvme_io_md": false, 00:08:58.978 "write_zeroes": true, 00:08:58.978 "zcopy": true, 00:08:58.978 "get_zone_info": false, 00:08:58.978 "zone_management": false, 00:08:58.978 "zone_append": false, 00:08:58.978 "compare": false, 00:08:58.978 "compare_and_write": false, 00:08:58.978 "abort": true, 00:08:58.978 "seek_hole": false, 00:08:58.978 "seek_data": false, 00:08:58.978 "copy": true, 00:08:58.978 "nvme_iov_md": false 00:08:58.978 }, 00:08:58.978 "memory_domains": [ 00:08:58.978 { 00:08:58.978 "dma_device_id": "system", 00:08:58.978 "dma_device_type": 1 00:08:58.978 }, 00:08:58.978 { 00:08:58.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:58.978 "dma_device_type": 2 00:08:58.978 } 00:08:58.978 ], 00:08:58.978 "driver_specific": {} 00:08:58.978 } 00:08:58.978 ] 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.978 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:59.236 17:27:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:59.236 "name": "Existed_Raid", 00:08:59.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.236 "strip_size_kb": 64, 00:08:59.236 "state": "configuring", 00:08:59.236 "raid_level": "raid0", 00:08:59.236 "superblock": false, 00:08:59.236 "num_base_bdevs": 3, 00:08:59.236 "num_base_bdevs_discovered": 1, 00:08:59.236 "num_base_bdevs_operational": 3, 00:08:59.236 "base_bdevs_list": [ 00:08:59.236 { 00:08:59.236 "name": "BaseBdev1", 00:08:59.236 "uuid": "91b1a435-42cf-11ef-96ac-773515fba644", 00:08:59.236 "is_configured": true, 00:08:59.236 "data_offset": 0, 00:08:59.236 "data_size": 65536 00:08:59.236 }, 00:08:59.236 { 00:08:59.236 "name": "BaseBdev2", 00:08:59.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.236 "is_configured": false, 00:08:59.236 "data_offset": 0, 00:08:59.236 "data_size": 0 00:08:59.236 }, 00:08:59.236 { 00:08:59.236 "name": "BaseBdev3", 00:08:59.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:59.236 "is_configured": false, 00:08:59.236 "data_offset": 0, 00:08:59.236 "data_size": 0 00:08:59.236 } 00:08:59.236 ] 00:08:59.236 }' 00:08:59.236 17:27:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:59.236 17:27:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.494 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:59.752 [2024-07-15 17:27:55.525664] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:59.752 [2024-07-15 17:27:55.525694] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a8b3d634500 name Existed_Raid, state configuring 00:08:59.752 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:00.009 [2024-07-15 17:27:55.813705] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:00.009 [2024-07-15 17:27:55.814590] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:00.009 [2024-07-15 17:27:55.814660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:00.009 [2024-07-15 17:27:55.814679] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:00.009 [2024-07-15 17:27:55.814687] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:00.009 17:27:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:00.009 17:27:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:00.574 17:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:00.574 "name": "Existed_Raid", 00:09:00.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.574 "strip_size_kb": 64, 00:09:00.574 "state": "configuring", 00:09:00.574 "raid_level": "raid0", 00:09:00.574 "superblock": false, 00:09:00.574 "num_base_bdevs": 3, 00:09:00.574 "num_base_bdevs_discovered": 1, 00:09:00.574 "num_base_bdevs_operational": 3, 00:09:00.574 "base_bdevs_list": [ 00:09:00.574 { 00:09:00.574 "name": "BaseBdev1", 00:09:00.574 "uuid": "91b1a435-42cf-11ef-96ac-773515fba644", 00:09:00.574 "is_configured": true, 00:09:00.574 "data_offset": 0, 00:09:00.574 "data_size": 65536 00:09:00.574 }, 00:09:00.574 { 00:09:00.574 "name": "BaseBdev2", 00:09:00.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.574 "is_configured": false, 00:09:00.574 "data_offset": 0, 00:09:00.574 "data_size": 0 00:09:00.574 }, 00:09:00.574 { 00:09:00.574 "name": "BaseBdev3", 00:09:00.574 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:00.574 "is_configured": false, 00:09:00.574 "data_offset": 0, 00:09:00.574 "data_size": 0 00:09:00.574 } 00:09:00.574 ] 00:09:00.574 }' 00:09:00.574 17:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:00.574 17:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.831 17:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:00.831 [2024-07-15 17:27:56.661904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:01.089 BaseBdev2 00:09:01.089 17:27:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:01.090 17:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:01.090 17:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:01.090 17:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:01.090 17:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:01.090 17:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:01.090 17:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:01.346 17:27:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:01.604 [ 00:09:01.604 { 00:09:01.604 "name": "BaseBdev2", 00:09:01.604 "aliases": [ 00:09:01.604 "932cb0d5-42cf-11ef-96ac-773515fba644" 00:09:01.604 ], 00:09:01.604 "product_name": "Malloc disk", 00:09:01.604 "block_size": 512, 00:09:01.604 "num_blocks": 65536, 00:09:01.604 "uuid": "932cb0d5-42cf-11ef-96ac-773515fba644", 00:09:01.604 "assigned_rate_limits": { 00:09:01.604 "rw_ios_per_sec": 0, 00:09:01.604 "rw_mbytes_per_sec": 0, 00:09:01.604 "r_mbytes_per_sec": 0, 00:09:01.604 "w_mbytes_per_sec": 0 00:09:01.604 }, 00:09:01.604 "claimed": true, 00:09:01.604 "claim_type": "exclusive_write", 00:09:01.604 "zoned": false, 00:09:01.604 "supported_io_types": { 00:09:01.604 "read": true, 00:09:01.604 "write": true, 00:09:01.604 "unmap": true, 00:09:01.604 "flush": true, 00:09:01.604 "reset": true, 00:09:01.604 "nvme_admin": false, 00:09:01.604 "nvme_io": false, 00:09:01.604 "nvme_io_md": false, 00:09:01.604 "write_zeroes": true, 00:09:01.604 "zcopy": true, 00:09:01.604 "get_zone_info": false, 00:09:01.604 "zone_management": false, 00:09:01.604 "zone_append": false, 00:09:01.604 "compare": false, 00:09:01.604 "compare_and_write": false, 00:09:01.604 "abort": true, 00:09:01.604 "seek_hole": false, 00:09:01.604 "seek_data": false, 00:09:01.604 "copy": true, 00:09:01.604 "nvme_iov_md": false 00:09:01.604 }, 00:09:01.604 "memory_domains": [ 00:09:01.604 { 00:09:01.604 "dma_device_id": "system", 00:09:01.604 "dma_device_type": 1 00:09:01.604 }, 00:09:01.604 { 00:09:01.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.604 "dma_device_type": 2 00:09:01.604 } 00:09:01.604 ], 00:09:01.604 "driver_specific": {} 00:09:01.604 } 00:09:01.604 ] 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:01.604 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
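The xtrace above (bdev_raid.sh@250-268) drives the raid_state_function_test flow: a raid0 bdev named Existed_Raid is registered while its base bdevs do not yet exist, so it stays in the "configuring" state, and then malloc base bdevs are created one at a time until the volume can go "online". A condensed sketch of that RPC sequence is given below, using only the commands and flags that appear verbatim in this trace; it assumes an SPDK application is already serving RPCs on /var/tmp/spdk-raid.sock and that the repository path matches this run (/home/vagrant/spdk_repo/spdk), so treat it as illustrative rather than the test script itself.

# Sketch only: replays the RPC calls visible in the trace above.
# Assumes a running SPDK app with its RPC server listening on $sock.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# Register the raid first; its base bdevs do not exist yet, so the raid
# bdev is created in the "configuring" state (as the DEBUG lines show).
"$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

# Add the malloc base bdevs one by one (32 MiB, 512-byte blocks); each one
# is claimed by the raid as soon as it appears.
for b in BaseBdev1 BaseBdev2 BaseBdev3; do
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "$b"
done

# Query the raid state the same way verify_raid_bdev_state does; it reports
# "configuring" until all three base bdevs are discovered, then "online".
"$rpc" -s "$sock" bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "Existed_Raid")'

Note that the strip size passed with -z is echoed back as strip_size_kb (64) in the JSON dumps that follow, and num_base_bdevs_discovered tracks how many of the three base bdevs have been attached so far.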
00:09:01.862 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:01.862 "name": "Existed_Raid", 00:09:01.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.862 "strip_size_kb": 64, 00:09:01.862 "state": "configuring", 00:09:01.862 "raid_level": "raid0", 00:09:01.862 "superblock": false, 00:09:01.862 "num_base_bdevs": 3, 00:09:01.862 "num_base_bdevs_discovered": 2, 00:09:01.862 "num_base_bdevs_operational": 3, 00:09:01.862 "base_bdevs_list": [ 00:09:01.862 { 00:09:01.862 "name": "BaseBdev1", 00:09:01.862 "uuid": "91b1a435-42cf-11ef-96ac-773515fba644", 00:09:01.862 "is_configured": true, 00:09:01.862 "data_offset": 0, 00:09:01.862 "data_size": 65536 00:09:01.862 }, 00:09:01.862 { 00:09:01.862 "name": "BaseBdev2", 00:09:01.862 "uuid": "932cb0d5-42cf-11ef-96ac-773515fba644", 00:09:01.862 "is_configured": true, 00:09:01.862 "data_offset": 0, 00:09:01.862 "data_size": 65536 00:09:01.862 }, 00:09:01.862 { 00:09:01.862 "name": "BaseBdev3", 00:09:01.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.862 "is_configured": false, 00:09:01.862 "data_offset": 0, 00:09:01.862 "data_size": 0 00:09:01.862 } 00:09:01.862 ] 00:09:01.862 }' 00:09:01.862 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:01.862 17:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.119 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:02.376 [2024-07-15 17:27:57.982013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:02.376 [2024-07-15 17:27:57.982039] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2a8b3d634a00 00:09:02.376 [2024-07-15 17:27:57.982043] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:02.376 [2024-07-15 17:27:57.982096] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2a8b3d697e20 00:09:02.376 [2024-07-15 17:27:57.982183] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2a8b3d634a00 00:09:02.376 [2024-07-15 17:27:57.982188] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2a8b3d634a00 00:09:02.376 [2024-07-15 17:27:57.982223] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.376 BaseBdev3 00:09:02.376 17:27:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:02.376 17:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:02.376 17:27:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:02.376 17:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:02.376 17:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:02.376 17:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:02.376 17:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:02.634 17:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:02.892 [ 00:09:02.892 { 00:09:02.892 "name": "BaseBdev3", 00:09:02.892 "aliases": [ 00:09:02.892 "93f61e38-42cf-11ef-96ac-773515fba644" 00:09:02.892 ], 00:09:02.892 "product_name": "Malloc disk", 00:09:02.892 "block_size": 512, 00:09:02.892 "num_blocks": 65536, 00:09:02.892 "uuid": "93f61e38-42cf-11ef-96ac-773515fba644", 00:09:02.892 "assigned_rate_limits": { 00:09:02.892 "rw_ios_per_sec": 0, 00:09:02.892 "rw_mbytes_per_sec": 0, 00:09:02.892 "r_mbytes_per_sec": 0, 00:09:02.892 "w_mbytes_per_sec": 0 00:09:02.892 }, 00:09:02.892 "claimed": true, 00:09:02.892 "claim_type": "exclusive_write", 00:09:02.892 "zoned": false, 00:09:02.892 "supported_io_types": { 00:09:02.892 "read": true, 00:09:02.892 "write": true, 00:09:02.892 "unmap": true, 00:09:02.892 "flush": true, 00:09:02.892 "reset": true, 00:09:02.892 "nvme_admin": false, 00:09:02.892 "nvme_io": false, 00:09:02.892 "nvme_io_md": false, 00:09:02.892 "write_zeroes": true, 00:09:02.892 "zcopy": true, 00:09:02.892 "get_zone_info": false, 00:09:02.892 "zone_management": false, 00:09:02.892 "zone_append": false, 00:09:02.892 "compare": false, 00:09:02.892 "compare_and_write": false, 00:09:02.892 "abort": true, 00:09:02.892 "seek_hole": false, 00:09:02.892 "seek_data": false, 00:09:02.892 "copy": true, 00:09:02.892 "nvme_iov_md": false 00:09:02.892 }, 00:09:02.892 "memory_domains": [ 00:09:02.892 { 00:09:02.892 "dma_device_id": "system", 00:09:02.892 "dma_device_type": 1 00:09:02.892 }, 00:09:02.892 { 00:09:02.892 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.892 "dma_device_type": 2 00:09:02.892 } 00:09:02.892 ], 00:09:02.892 "driver_specific": {} 00:09:02.892 } 00:09:02.892 ] 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.892 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.151 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:09:03.151 "name": "Existed_Raid", 00:09:03.151 "uuid": "93f62660-42cf-11ef-96ac-773515fba644", 00:09:03.151 "strip_size_kb": 64, 00:09:03.151 "state": "online", 00:09:03.151 "raid_level": "raid0", 00:09:03.151 "superblock": false, 00:09:03.151 "num_base_bdevs": 3, 00:09:03.151 "num_base_bdevs_discovered": 3, 00:09:03.151 "num_base_bdevs_operational": 3, 00:09:03.151 "base_bdevs_list": [ 00:09:03.151 { 00:09:03.151 "name": "BaseBdev1", 00:09:03.151 "uuid": "91b1a435-42cf-11ef-96ac-773515fba644", 00:09:03.151 "is_configured": true, 00:09:03.151 "data_offset": 0, 00:09:03.151 "data_size": 65536 00:09:03.151 }, 00:09:03.151 { 00:09:03.151 "name": "BaseBdev2", 00:09:03.151 "uuid": "932cb0d5-42cf-11ef-96ac-773515fba644", 00:09:03.151 "is_configured": true, 00:09:03.151 "data_offset": 0, 00:09:03.151 "data_size": 65536 00:09:03.151 }, 00:09:03.151 { 00:09:03.151 "name": "BaseBdev3", 00:09:03.151 "uuid": "93f61e38-42cf-11ef-96ac-773515fba644", 00:09:03.151 "is_configured": true, 00:09:03.151 "data_offset": 0, 00:09:03.151 "data_size": 65536 00:09:03.151 } 00:09:03.151 ] 00:09:03.151 }' 00:09:03.151 17:27:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:03.151 17:27:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.411 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:03.411 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:03.411 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:03.411 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:03.411 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:03.411 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:03.411 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:03.411 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:03.669 [2024-07-15 17:27:59.298005] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:03.669 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:03.669 "name": "Existed_Raid", 00:09:03.669 "aliases": [ 00:09:03.669 "93f62660-42cf-11ef-96ac-773515fba644" 00:09:03.669 ], 00:09:03.669 "product_name": "Raid Volume", 00:09:03.669 "block_size": 512, 00:09:03.669 "num_blocks": 196608, 00:09:03.669 "uuid": "93f62660-42cf-11ef-96ac-773515fba644", 00:09:03.669 "assigned_rate_limits": { 00:09:03.669 "rw_ios_per_sec": 0, 00:09:03.669 "rw_mbytes_per_sec": 0, 00:09:03.669 "r_mbytes_per_sec": 0, 00:09:03.669 "w_mbytes_per_sec": 0 00:09:03.669 }, 00:09:03.669 "claimed": false, 00:09:03.669 "zoned": false, 00:09:03.669 "supported_io_types": { 00:09:03.669 "read": true, 00:09:03.669 "write": true, 00:09:03.669 "unmap": true, 00:09:03.669 "flush": true, 00:09:03.669 "reset": true, 00:09:03.669 "nvme_admin": false, 00:09:03.669 "nvme_io": false, 00:09:03.669 "nvme_io_md": false, 00:09:03.669 "write_zeroes": true, 00:09:03.669 "zcopy": false, 00:09:03.669 "get_zone_info": false, 00:09:03.669 "zone_management": false, 00:09:03.669 "zone_append": false, 00:09:03.669 "compare": false, 
00:09:03.669 "compare_and_write": false, 00:09:03.669 "abort": false, 00:09:03.669 "seek_hole": false, 00:09:03.669 "seek_data": false, 00:09:03.669 "copy": false, 00:09:03.669 "nvme_iov_md": false 00:09:03.669 }, 00:09:03.669 "memory_domains": [ 00:09:03.669 { 00:09:03.669 "dma_device_id": "system", 00:09:03.669 "dma_device_type": 1 00:09:03.669 }, 00:09:03.669 { 00:09:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.669 "dma_device_type": 2 00:09:03.669 }, 00:09:03.669 { 00:09:03.669 "dma_device_id": "system", 00:09:03.669 "dma_device_type": 1 00:09:03.669 }, 00:09:03.669 { 00:09:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.669 "dma_device_type": 2 00:09:03.669 }, 00:09:03.669 { 00:09:03.669 "dma_device_id": "system", 00:09:03.669 "dma_device_type": 1 00:09:03.669 }, 00:09:03.669 { 00:09:03.669 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.669 "dma_device_type": 2 00:09:03.669 } 00:09:03.669 ], 00:09:03.669 "driver_specific": { 00:09:03.669 "raid": { 00:09:03.669 "uuid": "93f62660-42cf-11ef-96ac-773515fba644", 00:09:03.669 "strip_size_kb": 64, 00:09:03.669 "state": "online", 00:09:03.669 "raid_level": "raid0", 00:09:03.669 "superblock": false, 00:09:03.669 "num_base_bdevs": 3, 00:09:03.669 "num_base_bdevs_discovered": 3, 00:09:03.669 "num_base_bdevs_operational": 3, 00:09:03.669 "base_bdevs_list": [ 00:09:03.669 { 00:09:03.669 "name": "BaseBdev1", 00:09:03.669 "uuid": "91b1a435-42cf-11ef-96ac-773515fba644", 00:09:03.669 "is_configured": true, 00:09:03.669 "data_offset": 0, 00:09:03.669 "data_size": 65536 00:09:03.669 }, 00:09:03.669 { 00:09:03.669 "name": "BaseBdev2", 00:09:03.669 "uuid": "932cb0d5-42cf-11ef-96ac-773515fba644", 00:09:03.669 "is_configured": true, 00:09:03.669 "data_offset": 0, 00:09:03.669 "data_size": 65536 00:09:03.669 }, 00:09:03.669 { 00:09:03.669 "name": "BaseBdev3", 00:09:03.669 "uuid": "93f61e38-42cf-11ef-96ac-773515fba644", 00:09:03.669 "is_configured": true, 00:09:03.669 "data_offset": 0, 00:09:03.669 "data_size": 65536 00:09:03.669 } 00:09:03.669 ] 00:09:03.669 } 00:09:03.669 } 00:09:03.669 }' 00:09:03.669 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:03.669 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:03.669 BaseBdev2 00:09:03.669 BaseBdev3' 00:09:03.669 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:03.669 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:03.669 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:03.927 "name": "BaseBdev1", 00:09:03.927 "aliases": [ 00:09:03.927 "91b1a435-42cf-11ef-96ac-773515fba644" 00:09:03.927 ], 00:09:03.927 "product_name": "Malloc disk", 00:09:03.927 "block_size": 512, 00:09:03.927 "num_blocks": 65536, 00:09:03.927 "uuid": "91b1a435-42cf-11ef-96ac-773515fba644", 00:09:03.927 "assigned_rate_limits": { 00:09:03.927 "rw_ios_per_sec": 0, 00:09:03.927 "rw_mbytes_per_sec": 0, 00:09:03.927 "r_mbytes_per_sec": 0, 00:09:03.927 "w_mbytes_per_sec": 0 00:09:03.927 }, 00:09:03.927 "claimed": true, 00:09:03.927 "claim_type": "exclusive_write", 00:09:03.927 "zoned": false, 00:09:03.927 
"supported_io_types": { 00:09:03.927 "read": true, 00:09:03.927 "write": true, 00:09:03.927 "unmap": true, 00:09:03.927 "flush": true, 00:09:03.927 "reset": true, 00:09:03.927 "nvme_admin": false, 00:09:03.927 "nvme_io": false, 00:09:03.927 "nvme_io_md": false, 00:09:03.927 "write_zeroes": true, 00:09:03.927 "zcopy": true, 00:09:03.927 "get_zone_info": false, 00:09:03.927 "zone_management": false, 00:09:03.927 "zone_append": false, 00:09:03.927 "compare": false, 00:09:03.927 "compare_and_write": false, 00:09:03.927 "abort": true, 00:09:03.927 "seek_hole": false, 00:09:03.927 "seek_data": false, 00:09:03.927 "copy": true, 00:09:03.927 "nvme_iov_md": false 00:09:03.927 }, 00:09:03.927 "memory_domains": [ 00:09:03.927 { 00:09:03.927 "dma_device_id": "system", 00:09:03.927 "dma_device_type": 1 00:09:03.927 }, 00:09:03.927 { 00:09:03.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.927 "dma_device_type": 2 00:09:03.927 } 00:09:03.927 ], 00:09:03.927 "driver_specific": {} 00:09:03.927 }' 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:03.927 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:03.928 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:03.928 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:03.928 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:03.928 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:03.928 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:03.928 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:04.185 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:04.185 "name": "BaseBdev2", 00:09:04.185 "aliases": [ 00:09:04.185 "932cb0d5-42cf-11ef-96ac-773515fba644" 00:09:04.185 ], 00:09:04.185 "product_name": "Malloc disk", 00:09:04.185 "block_size": 512, 00:09:04.185 "num_blocks": 65536, 00:09:04.185 "uuid": "932cb0d5-42cf-11ef-96ac-773515fba644", 00:09:04.185 "assigned_rate_limits": { 00:09:04.185 "rw_ios_per_sec": 0, 00:09:04.185 "rw_mbytes_per_sec": 0, 00:09:04.185 "r_mbytes_per_sec": 0, 00:09:04.185 "w_mbytes_per_sec": 0 00:09:04.185 }, 00:09:04.185 "claimed": true, 00:09:04.185 "claim_type": "exclusive_write", 00:09:04.185 "zoned": false, 00:09:04.185 "supported_io_types": { 00:09:04.185 "read": true, 00:09:04.185 "write": true, 00:09:04.185 "unmap": true, 00:09:04.185 "flush": true, 00:09:04.185 "reset": true, 00:09:04.185 "nvme_admin": false, 
00:09:04.185 "nvme_io": false, 00:09:04.185 "nvme_io_md": false, 00:09:04.185 "write_zeroes": true, 00:09:04.185 "zcopy": true, 00:09:04.185 "get_zone_info": false, 00:09:04.185 "zone_management": false, 00:09:04.185 "zone_append": false, 00:09:04.185 "compare": false, 00:09:04.185 "compare_and_write": false, 00:09:04.185 "abort": true, 00:09:04.186 "seek_hole": false, 00:09:04.186 "seek_data": false, 00:09:04.186 "copy": true, 00:09:04.186 "nvme_iov_md": false 00:09:04.186 }, 00:09:04.186 "memory_domains": [ 00:09:04.186 { 00:09:04.186 "dma_device_id": "system", 00:09:04.186 "dma_device_type": 1 00:09:04.186 }, 00:09:04.186 { 00:09:04.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.186 "dma_device_type": 2 00:09:04.186 } 00:09:04.186 ], 00:09:04.186 "driver_specific": {} 00:09:04.186 }' 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:04.186 17:27:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:04.186 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:04.186 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:04.186 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:04.186 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:04.186 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:04.752 "name": "BaseBdev3", 00:09:04.752 "aliases": [ 00:09:04.752 "93f61e38-42cf-11ef-96ac-773515fba644" 00:09:04.752 ], 00:09:04.752 "product_name": "Malloc disk", 00:09:04.752 "block_size": 512, 00:09:04.752 "num_blocks": 65536, 00:09:04.752 "uuid": "93f61e38-42cf-11ef-96ac-773515fba644", 00:09:04.752 "assigned_rate_limits": { 00:09:04.752 "rw_ios_per_sec": 0, 00:09:04.752 "rw_mbytes_per_sec": 0, 00:09:04.752 "r_mbytes_per_sec": 0, 00:09:04.752 "w_mbytes_per_sec": 0 00:09:04.752 }, 00:09:04.752 "claimed": true, 00:09:04.752 "claim_type": "exclusive_write", 00:09:04.752 "zoned": false, 00:09:04.752 "supported_io_types": { 00:09:04.752 "read": true, 00:09:04.752 "write": true, 00:09:04.752 "unmap": true, 00:09:04.752 "flush": true, 00:09:04.752 "reset": true, 00:09:04.752 "nvme_admin": false, 00:09:04.752 "nvme_io": false, 00:09:04.752 "nvme_io_md": false, 00:09:04.752 "write_zeroes": true, 00:09:04.752 "zcopy": true, 00:09:04.752 "get_zone_info": false, 00:09:04.752 "zone_management": 
false, 00:09:04.752 "zone_append": false, 00:09:04.752 "compare": false, 00:09:04.752 "compare_and_write": false, 00:09:04.752 "abort": true, 00:09:04.752 "seek_hole": false, 00:09:04.752 "seek_data": false, 00:09:04.752 "copy": true, 00:09:04.752 "nvme_iov_md": false 00:09:04.752 }, 00:09:04.752 "memory_domains": [ 00:09:04.752 { 00:09:04.752 "dma_device_id": "system", 00:09:04.752 "dma_device_type": 1 00:09:04.752 }, 00:09:04.752 { 00:09:04.752 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.752 "dma_device_type": 2 00:09:04.752 } 00:09:04.752 ], 00:09:04.752 "driver_specific": {} 00:09:04.752 }' 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:04.752 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:05.010 [2024-07-15 17:28:00.602146] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:05.010 [2024-07-15 17:28:00.602169] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.010 [2024-07-15 17:28:00.602199] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:05.010 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:05.010 17:28:00 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:05.011 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:05.011 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:05.011 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:05.011 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:05.011 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.011 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:05.269 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:05.269 "name": "Existed_Raid", 00:09:05.269 "uuid": "93f62660-42cf-11ef-96ac-773515fba644", 00:09:05.269 "strip_size_kb": 64, 00:09:05.269 "state": "offline", 00:09:05.269 "raid_level": "raid0", 00:09:05.269 "superblock": false, 00:09:05.269 "num_base_bdevs": 3, 00:09:05.269 "num_base_bdevs_discovered": 2, 00:09:05.269 "num_base_bdevs_operational": 2, 00:09:05.269 "base_bdevs_list": [ 00:09:05.269 { 00:09:05.269 "name": null, 00:09:05.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:05.269 "is_configured": false, 00:09:05.269 "data_offset": 0, 00:09:05.269 "data_size": 65536 00:09:05.269 }, 00:09:05.269 { 00:09:05.269 "name": "BaseBdev2", 00:09:05.269 "uuid": "932cb0d5-42cf-11ef-96ac-773515fba644", 00:09:05.269 "is_configured": true, 00:09:05.269 "data_offset": 0, 00:09:05.269 "data_size": 65536 00:09:05.269 }, 00:09:05.269 { 00:09:05.269 "name": "BaseBdev3", 00:09:05.269 "uuid": "93f61e38-42cf-11ef-96ac-773515fba644", 00:09:05.269 "is_configured": true, 00:09:05.269 "data_offset": 0, 00:09:05.269 "data_size": 65536 00:09:05.269 } 00:09:05.269 ] 00:09:05.269 }' 00:09:05.269 17:28:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:05.269 17:28:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.527 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:05.527 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:05.527 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:05.527 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:05.784 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:05.784 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:05.784 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:06.042 [2024-07-15 17:28:01.664631] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:06.042 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:06.042 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:06.042 17:28:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.042 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:06.300 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:06.300 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:06.300 17:28:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:06.558 [2024-07-15 17:28:02.242949] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:06.558 [2024-07-15 17:28:02.242994] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a8b3d634a00 name Existed_Raid, state offline 00:09:06.558 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:06.558 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:06.558 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:06.558 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:06.817 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:06.817 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:06.817 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:06.817 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:06.817 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:06.817 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:07.080 BaseBdev2 00:09:07.080 17:28:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:07.080 17:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:07.080 17:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:07.080 17:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:07.080 17:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:07.080 17:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:07.080 17:28:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:07.354 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:07.611 [ 00:09:07.611 { 00:09:07.611 "name": "BaseBdev2", 00:09:07.611 "aliases": [ 00:09:07.611 "96cffe21-42cf-11ef-96ac-773515fba644" 00:09:07.611 ], 00:09:07.611 "product_name": "Malloc disk", 00:09:07.611 "block_size": 512, 00:09:07.611 "num_blocks": 65536, 00:09:07.611 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 
00:09:07.611 "assigned_rate_limits": { 00:09:07.611 "rw_ios_per_sec": 0, 00:09:07.611 "rw_mbytes_per_sec": 0, 00:09:07.611 "r_mbytes_per_sec": 0, 00:09:07.611 "w_mbytes_per_sec": 0 00:09:07.611 }, 00:09:07.611 "claimed": false, 00:09:07.611 "zoned": false, 00:09:07.611 "supported_io_types": { 00:09:07.611 "read": true, 00:09:07.611 "write": true, 00:09:07.611 "unmap": true, 00:09:07.611 "flush": true, 00:09:07.611 "reset": true, 00:09:07.611 "nvme_admin": false, 00:09:07.611 "nvme_io": false, 00:09:07.611 "nvme_io_md": false, 00:09:07.611 "write_zeroes": true, 00:09:07.611 "zcopy": true, 00:09:07.611 "get_zone_info": false, 00:09:07.611 "zone_management": false, 00:09:07.611 "zone_append": false, 00:09:07.611 "compare": false, 00:09:07.611 "compare_and_write": false, 00:09:07.611 "abort": true, 00:09:07.611 "seek_hole": false, 00:09:07.611 "seek_data": false, 00:09:07.611 "copy": true, 00:09:07.611 "nvme_iov_md": false 00:09:07.611 }, 00:09:07.611 "memory_domains": [ 00:09:07.611 { 00:09:07.611 "dma_device_id": "system", 00:09:07.611 "dma_device_type": 1 00:09:07.611 }, 00:09:07.611 { 00:09:07.611 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:07.611 "dma_device_type": 2 00:09:07.611 } 00:09:07.611 ], 00:09:07.611 "driver_specific": {} 00:09:07.611 } 00:09:07.611 ] 00:09:07.611 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:07.611 17:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:07.611 17:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:07.611 17:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:07.868 BaseBdev3 00:09:07.868 17:28:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:07.868 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:07.868 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:07.868 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:07.868 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:07.868 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:07.868 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:08.126 17:28:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:08.692 [ 00:09:08.692 { 00:09:08.692 "name": "BaseBdev3", 00:09:08.692 "aliases": [ 00:09:08.692 "9755aa9e-42cf-11ef-96ac-773515fba644" 00:09:08.692 ], 00:09:08.692 "product_name": "Malloc disk", 00:09:08.692 "block_size": 512, 00:09:08.692 "num_blocks": 65536, 00:09:08.692 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:08.692 "assigned_rate_limits": { 00:09:08.692 "rw_ios_per_sec": 0, 00:09:08.692 "rw_mbytes_per_sec": 0, 00:09:08.692 "r_mbytes_per_sec": 0, 00:09:08.692 "w_mbytes_per_sec": 0 00:09:08.692 }, 00:09:08.692 "claimed": false, 00:09:08.692 "zoned": false, 00:09:08.692 "supported_io_types": { 00:09:08.692 "read": true, 00:09:08.692 "write": 
true, 00:09:08.692 "unmap": true, 00:09:08.692 "flush": true, 00:09:08.692 "reset": true, 00:09:08.692 "nvme_admin": false, 00:09:08.692 "nvme_io": false, 00:09:08.692 "nvme_io_md": false, 00:09:08.692 "write_zeroes": true, 00:09:08.692 "zcopy": true, 00:09:08.692 "get_zone_info": false, 00:09:08.692 "zone_management": false, 00:09:08.692 "zone_append": false, 00:09:08.692 "compare": false, 00:09:08.692 "compare_and_write": false, 00:09:08.692 "abort": true, 00:09:08.692 "seek_hole": false, 00:09:08.692 "seek_data": false, 00:09:08.692 "copy": true, 00:09:08.692 "nvme_iov_md": false 00:09:08.692 }, 00:09:08.692 "memory_domains": [ 00:09:08.692 { 00:09:08.692 "dma_device_id": "system", 00:09:08.692 "dma_device_type": 1 00:09:08.692 }, 00:09:08.692 { 00:09:08.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.692 "dma_device_type": 2 00:09:08.692 } 00:09:08.692 ], 00:09:08.692 "driver_specific": {} 00:09:08.692 } 00:09:08.692 ] 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:08.692 [2024-07-15 17:28:04.493246] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.692 [2024-07-15 17:28:04.493295] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.692 [2024-07-15 17:28:04.493303] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.692 [2024-07-15 17:28:04.493876] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:08.692 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.258 17:28:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:09.258 "name": "Existed_Raid", 00:09:09.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.258 "strip_size_kb": 64, 00:09:09.258 "state": "configuring", 00:09:09.258 "raid_level": "raid0", 00:09:09.258 "superblock": false, 00:09:09.258 "num_base_bdevs": 3, 00:09:09.258 "num_base_bdevs_discovered": 2, 00:09:09.258 "num_base_bdevs_operational": 3, 00:09:09.258 "base_bdevs_list": [ 00:09:09.258 { 00:09:09.258 "name": "BaseBdev1", 00:09:09.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.258 "is_configured": false, 00:09:09.258 "data_offset": 0, 00:09:09.258 "data_size": 0 00:09:09.258 }, 00:09:09.258 { 00:09:09.258 "name": "BaseBdev2", 00:09:09.258 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:09.258 "is_configured": true, 00:09:09.258 "data_offset": 0, 00:09:09.258 "data_size": 65536 00:09:09.258 }, 00:09:09.258 { 00:09:09.258 "name": "BaseBdev3", 00:09:09.258 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:09.258 "is_configured": true, 00:09:09.258 "data_offset": 0, 00:09:09.258 "data_size": 65536 00:09:09.258 } 00:09:09.258 ] 00:09:09.258 }' 00:09:09.258 17:28:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:09.258 17:28:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.516 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:09.516 [2024-07-15 17:28:05.333269] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:09.775 "name": "Existed_Raid", 00:09:09.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.775 "strip_size_kb": 64, 00:09:09.775 "state": "configuring", 00:09:09.775 "raid_level": "raid0", 00:09:09.775 "superblock": false, 00:09:09.775 "num_base_bdevs": 3, 00:09:09.775 "num_base_bdevs_discovered": 1, 
00:09:09.775 "num_base_bdevs_operational": 3, 00:09:09.775 "base_bdevs_list": [ 00:09:09.775 { 00:09:09.775 "name": "BaseBdev1", 00:09:09.775 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:09.775 "is_configured": false, 00:09:09.775 "data_offset": 0, 00:09:09.775 "data_size": 0 00:09:09.775 }, 00:09:09.775 { 00:09:09.775 "name": null, 00:09:09.775 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:09.775 "is_configured": false, 00:09:09.775 "data_offset": 0, 00:09:09.775 "data_size": 65536 00:09:09.775 }, 00:09:09.775 { 00:09:09.775 "name": "BaseBdev3", 00:09:09.775 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:09.775 "is_configured": true, 00:09:09.775 "data_offset": 0, 00:09:09.775 "data_size": 65536 00:09:09.775 } 00:09:09.775 ] 00:09:09.775 }' 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:09.775 17:28:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.340 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.340 17:28:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:10.340 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:10.340 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:10.598 [2024-07-15 17:28:06.353438] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:10.598 BaseBdev1 00:09:10.598 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:10.598 17:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:10.598 17:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:10.598 17:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:10.598 17:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:10.598 17:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:10.598 17:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:10.856 17:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:11.115 [ 00:09:11.115 { 00:09:11.115 "name": "BaseBdev1", 00:09:11.115 "aliases": [ 00:09:11.115 "98f37f85-42cf-11ef-96ac-773515fba644" 00:09:11.115 ], 00:09:11.115 "product_name": "Malloc disk", 00:09:11.115 "block_size": 512, 00:09:11.115 "num_blocks": 65536, 00:09:11.115 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:11.115 "assigned_rate_limits": { 00:09:11.115 "rw_ios_per_sec": 0, 00:09:11.115 "rw_mbytes_per_sec": 0, 00:09:11.115 "r_mbytes_per_sec": 0, 00:09:11.115 "w_mbytes_per_sec": 0 00:09:11.115 }, 00:09:11.115 "claimed": true, 00:09:11.115 "claim_type": "exclusive_write", 00:09:11.115 "zoned": false, 00:09:11.115 "supported_io_types": { 00:09:11.115 "read": true, 00:09:11.115 "write": true, 00:09:11.115 "unmap": 
true, 00:09:11.115 "flush": true, 00:09:11.115 "reset": true, 00:09:11.115 "nvme_admin": false, 00:09:11.115 "nvme_io": false, 00:09:11.115 "nvme_io_md": false, 00:09:11.115 "write_zeroes": true, 00:09:11.115 "zcopy": true, 00:09:11.115 "get_zone_info": false, 00:09:11.115 "zone_management": false, 00:09:11.115 "zone_append": false, 00:09:11.115 "compare": false, 00:09:11.115 "compare_and_write": false, 00:09:11.115 "abort": true, 00:09:11.115 "seek_hole": false, 00:09:11.115 "seek_data": false, 00:09:11.115 "copy": true, 00:09:11.115 "nvme_iov_md": false 00:09:11.115 }, 00:09:11.115 "memory_domains": [ 00:09:11.115 { 00:09:11.115 "dma_device_id": "system", 00:09:11.115 "dma_device_type": 1 00:09:11.115 }, 00:09:11.115 { 00:09:11.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:11.115 "dma_device_type": 2 00:09:11.115 } 00:09:11.115 ], 00:09:11.115 "driver_specific": {} 00:09:11.115 } 00:09:11.115 ] 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.115 17:28:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.415 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:11.415 "name": "Existed_Raid", 00:09:11.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.415 "strip_size_kb": 64, 00:09:11.415 "state": "configuring", 00:09:11.415 "raid_level": "raid0", 00:09:11.415 "superblock": false, 00:09:11.415 "num_base_bdevs": 3, 00:09:11.415 "num_base_bdevs_discovered": 2, 00:09:11.415 "num_base_bdevs_operational": 3, 00:09:11.415 "base_bdevs_list": [ 00:09:11.415 { 00:09:11.415 "name": "BaseBdev1", 00:09:11.415 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:11.415 "is_configured": true, 00:09:11.415 "data_offset": 0, 00:09:11.415 "data_size": 65536 00:09:11.415 }, 00:09:11.415 { 00:09:11.415 "name": null, 00:09:11.415 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:11.415 "is_configured": false, 00:09:11.415 "data_offset": 0, 00:09:11.415 "data_size": 65536 00:09:11.415 }, 00:09:11.415 { 00:09:11.415 "name": "BaseBdev3", 00:09:11.415 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 
00:09:11.415 "is_configured": true, 00:09:11.415 "data_offset": 0, 00:09:11.415 "data_size": 65536 00:09:11.415 } 00:09:11.415 ] 00:09:11.415 }' 00:09:11.415 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:11.415 17:28:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.694 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:11.694 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:11.952 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:11.952 17:28:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:12.210 [2024-07-15 17:28:08.013394] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:12.210 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.776 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:12.776 "name": "Existed_Raid", 00:09:12.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.776 "strip_size_kb": 64, 00:09:12.776 "state": "configuring", 00:09:12.776 "raid_level": "raid0", 00:09:12.776 "superblock": false, 00:09:12.776 "num_base_bdevs": 3, 00:09:12.776 "num_base_bdevs_discovered": 1, 00:09:12.776 "num_base_bdevs_operational": 3, 00:09:12.776 "base_bdevs_list": [ 00:09:12.776 { 00:09:12.776 "name": "BaseBdev1", 00:09:12.776 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:12.776 "is_configured": true, 00:09:12.776 "data_offset": 0, 00:09:12.776 "data_size": 65536 00:09:12.776 }, 00:09:12.776 { 00:09:12.776 "name": null, 00:09:12.776 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:12.776 "is_configured": false, 00:09:12.776 "data_offset": 0, 00:09:12.776 "data_size": 65536 00:09:12.776 }, 00:09:12.776 { 00:09:12.776 "name": null, 00:09:12.776 "uuid": 
"9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:12.776 "is_configured": false, 00:09:12.776 "data_offset": 0, 00:09:12.776 "data_size": 65536 00:09:12.776 } 00:09:12.776 ] 00:09:12.776 }' 00:09:12.776 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:12.776 17:28:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.034 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.034 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.291 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:13.291 17:28:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:13.550 [2024-07-15 17:28:09.193491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.550 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.808 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:13.808 "name": "Existed_Raid", 00:09:13.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.808 "strip_size_kb": 64, 00:09:13.808 "state": "configuring", 00:09:13.808 "raid_level": "raid0", 00:09:13.808 "superblock": false, 00:09:13.808 "num_base_bdevs": 3, 00:09:13.808 "num_base_bdevs_discovered": 2, 00:09:13.808 "num_base_bdevs_operational": 3, 00:09:13.808 "base_bdevs_list": [ 00:09:13.808 { 00:09:13.808 "name": "BaseBdev1", 00:09:13.808 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:13.808 "is_configured": true, 00:09:13.808 "data_offset": 0, 00:09:13.808 "data_size": 65536 00:09:13.808 }, 00:09:13.808 { 00:09:13.808 "name": null, 00:09:13.808 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:13.808 "is_configured": false, 00:09:13.808 "data_offset": 0, 00:09:13.808 "data_size": 65536 
00:09:13.808 }, 00:09:13.808 { 00:09:13.808 "name": "BaseBdev3", 00:09:13.808 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:13.808 "is_configured": true, 00:09:13.808 "data_offset": 0, 00:09:13.808 "data_size": 65536 00:09:13.808 } 00:09:13.808 ] 00:09:13.808 }' 00:09:13.808 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:13.808 17:28:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.066 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:14.066 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:14.324 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:14.324 17:28:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:14.583 [2024-07-15 17:28:10.241532] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:14.583 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.841 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:14.841 "name": "Existed_Raid", 00:09:14.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.841 "strip_size_kb": 64, 00:09:14.841 "state": "configuring", 00:09:14.841 "raid_level": "raid0", 00:09:14.841 "superblock": false, 00:09:14.841 "num_base_bdevs": 3, 00:09:14.841 "num_base_bdevs_discovered": 1, 00:09:14.841 "num_base_bdevs_operational": 3, 00:09:14.841 "base_bdevs_list": [ 00:09:14.841 { 00:09:14.841 "name": null, 00:09:14.841 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:14.841 "is_configured": false, 00:09:14.841 "data_offset": 0, 00:09:14.841 "data_size": 65536 00:09:14.841 }, 00:09:14.841 { 00:09:14.841 "name": null, 00:09:14.841 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:14.841 "is_configured": false, 00:09:14.842 "data_offset": 
0, 00:09:14.842 "data_size": 65536 00:09:14.842 }, 00:09:14.842 { 00:09:14.842 "name": "BaseBdev3", 00:09:14.842 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:14.842 "is_configured": true, 00:09:14.842 "data_offset": 0, 00:09:14.842 "data_size": 65536 00:09:14.842 } 00:09:14.842 ] 00:09:14.842 }' 00:09:14.842 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:14.842 17:28:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.099 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.099 17:28:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:15.357 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:15.357 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:15.615 [2024-07-15 17:28:11.339332] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.615 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.873 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.873 "name": "Existed_Raid", 00:09:15.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.873 "strip_size_kb": 64, 00:09:15.873 "state": "configuring", 00:09:15.873 "raid_level": "raid0", 00:09:15.873 "superblock": false, 00:09:15.873 "num_base_bdevs": 3, 00:09:15.873 "num_base_bdevs_discovered": 2, 00:09:15.873 "num_base_bdevs_operational": 3, 00:09:15.873 "base_bdevs_list": [ 00:09:15.873 { 00:09:15.873 "name": null, 00:09:15.873 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:15.873 "is_configured": false, 00:09:15.873 "data_offset": 0, 00:09:15.873 "data_size": 65536 00:09:15.873 }, 00:09:15.873 { 00:09:15.873 "name": "BaseBdev2", 00:09:15.873 "uuid": 
"96cffe21-42cf-11ef-96ac-773515fba644", 00:09:15.873 "is_configured": true, 00:09:15.873 "data_offset": 0, 00:09:15.873 "data_size": 65536 00:09:15.873 }, 00:09:15.873 { 00:09:15.873 "name": "BaseBdev3", 00:09:15.873 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:15.873 "is_configured": true, 00:09:15.873 "data_offset": 0, 00:09:15.873 "data_size": 65536 00:09:15.873 } 00:09:15.873 ] 00:09:15.873 }' 00:09:15.873 17:28:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:15.873 17:28:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.438 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.438 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:16.438 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:16.438 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.438 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:16.696 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 98f37f85-42cf-11ef-96ac-773515fba644 00:09:16.955 [2024-07-15 17:28:12.751489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:16.955 [2024-07-15 17:28:12.751519] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2a8b3d634a00 00:09:16.955 [2024-07-15 17:28:12.751524] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:16.955 [2024-07-15 17:28:12.751547] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2a8b3d697e20 00:09:16.955 [2024-07-15 17:28:12.751619] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2a8b3d634a00 00:09:16.955 [2024-07-15 17:28:12.751624] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2a8b3d634a00 00:09:16.955 [2024-07-15 17:28:12.751657] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:16.955 NewBaseBdev 00:09:16.955 17:28:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:16.955 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:16.955 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:16.955 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:16.955 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:16.955 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:16.955 17:28:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:09:17.521 [ 00:09:17.521 { 00:09:17.521 "name": "NewBaseBdev", 00:09:17.521 "aliases": [ 00:09:17.521 "98f37f85-42cf-11ef-96ac-773515fba644" 00:09:17.521 ], 00:09:17.521 "product_name": "Malloc disk", 00:09:17.521 "block_size": 512, 00:09:17.521 "num_blocks": 65536, 00:09:17.521 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:17.521 "assigned_rate_limits": { 00:09:17.521 "rw_ios_per_sec": 0, 00:09:17.521 "rw_mbytes_per_sec": 0, 00:09:17.521 "r_mbytes_per_sec": 0, 00:09:17.521 "w_mbytes_per_sec": 0 00:09:17.521 }, 00:09:17.521 "claimed": true, 00:09:17.521 "claim_type": "exclusive_write", 00:09:17.521 "zoned": false, 00:09:17.521 "supported_io_types": { 00:09:17.521 "read": true, 00:09:17.521 "write": true, 00:09:17.521 "unmap": true, 00:09:17.521 "flush": true, 00:09:17.521 "reset": true, 00:09:17.521 "nvme_admin": false, 00:09:17.521 "nvme_io": false, 00:09:17.521 "nvme_io_md": false, 00:09:17.521 "write_zeroes": true, 00:09:17.521 "zcopy": true, 00:09:17.521 "get_zone_info": false, 00:09:17.521 "zone_management": false, 00:09:17.521 "zone_append": false, 00:09:17.521 "compare": false, 00:09:17.521 "compare_and_write": false, 00:09:17.521 "abort": true, 00:09:17.521 "seek_hole": false, 00:09:17.521 "seek_data": false, 00:09:17.521 "copy": true, 00:09:17.521 "nvme_iov_md": false 00:09:17.521 }, 00:09:17.521 "memory_domains": [ 00:09:17.521 { 00:09:17.521 "dma_device_id": "system", 00:09:17.521 "dma_device_type": 1 00:09:17.521 }, 00:09:17.521 { 00:09:17.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.521 "dma_device_type": 2 00:09:17.521 } 00:09:17.521 ], 00:09:17.521 "driver_specific": {} 00:09:17.521 } 00:09:17.521 ] 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.521 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.086 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:18.086 "name": "Existed_Raid", 00:09:18.086 "uuid": "9cc3caa9-42cf-11ef-96ac-773515fba644", 00:09:18.086 "strip_size_kb": 64, 00:09:18.086 "state": "online", 00:09:18.086 "raid_level": "raid0", 
00:09:18.086 "superblock": false, 00:09:18.086 "num_base_bdevs": 3, 00:09:18.086 "num_base_bdevs_discovered": 3, 00:09:18.086 "num_base_bdevs_operational": 3, 00:09:18.086 "base_bdevs_list": [ 00:09:18.086 { 00:09:18.086 "name": "NewBaseBdev", 00:09:18.086 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:18.086 "is_configured": true, 00:09:18.086 "data_offset": 0, 00:09:18.086 "data_size": 65536 00:09:18.086 }, 00:09:18.086 { 00:09:18.086 "name": "BaseBdev2", 00:09:18.086 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:18.086 "is_configured": true, 00:09:18.086 "data_offset": 0, 00:09:18.086 "data_size": 65536 00:09:18.086 }, 00:09:18.086 { 00:09:18.086 "name": "BaseBdev3", 00:09:18.086 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:18.086 "is_configured": true, 00:09:18.086 "data_offset": 0, 00:09:18.086 "data_size": 65536 00:09:18.086 } 00:09:18.086 ] 00:09:18.086 }' 00:09:18.086 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:18.086 17:28:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.345 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:18.345 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:18.345 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:18.345 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:18.345 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:18.345 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:18.345 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:18.345 17:28:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:18.603 [2024-07-15 17:28:14.223413] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:18.603 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:18.603 "name": "Existed_Raid", 00:09:18.603 "aliases": [ 00:09:18.603 "9cc3caa9-42cf-11ef-96ac-773515fba644" 00:09:18.603 ], 00:09:18.603 "product_name": "Raid Volume", 00:09:18.603 "block_size": 512, 00:09:18.603 "num_blocks": 196608, 00:09:18.603 "uuid": "9cc3caa9-42cf-11ef-96ac-773515fba644", 00:09:18.603 "assigned_rate_limits": { 00:09:18.603 "rw_ios_per_sec": 0, 00:09:18.603 "rw_mbytes_per_sec": 0, 00:09:18.603 "r_mbytes_per_sec": 0, 00:09:18.603 "w_mbytes_per_sec": 0 00:09:18.603 }, 00:09:18.603 "claimed": false, 00:09:18.603 "zoned": false, 00:09:18.603 "supported_io_types": { 00:09:18.603 "read": true, 00:09:18.603 "write": true, 00:09:18.603 "unmap": true, 00:09:18.603 "flush": true, 00:09:18.603 "reset": true, 00:09:18.603 "nvme_admin": false, 00:09:18.603 "nvme_io": false, 00:09:18.603 "nvme_io_md": false, 00:09:18.603 "write_zeroes": true, 00:09:18.603 "zcopy": false, 00:09:18.603 "get_zone_info": false, 00:09:18.603 "zone_management": false, 00:09:18.603 "zone_append": false, 00:09:18.603 "compare": false, 00:09:18.603 "compare_and_write": false, 00:09:18.603 "abort": false, 00:09:18.603 "seek_hole": false, 00:09:18.603 "seek_data": false, 00:09:18.603 "copy": false, 00:09:18.603 "nvme_iov_md": false 00:09:18.603 }, 00:09:18.603 
"memory_domains": [ 00:09:18.603 { 00:09:18.603 "dma_device_id": "system", 00:09:18.603 "dma_device_type": 1 00:09:18.603 }, 00:09:18.603 { 00:09:18.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.603 "dma_device_type": 2 00:09:18.603 }, 00:09:18.603 { 00:09:18.603 "dma_device_id": "system", 00:09:18.603 "dma_device_type": 1 00:09:18.603 }, 00:09:18.603 { 00:09:18.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.603 "dma_device_type": 2 00:09:18.603 }, 00:09:18.603 { 00:09:18.603 "dma_device_id": "system", 00:09:18.603 "dma_device_type": 1 00:09:18.603 }, 00:09:18.603 { 00:09:18.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.603 "dma_device_type": 2 00:09:18.603 } 00:09:18.603 ], 00:09:18.603 "driver_specific": { 00:09:18.603 "raid": { 00:09:18.603 "uuid": "9cc3caa9-42cf-11ef-96ac-773515fba644", 00:09:18.603 "strip_size_kb": 64, 00:09:18.603 "state": "online", 00:09:18.603 "raid_level": "raid0", 00:09:18.603 "superblock": false, 00:09:18.603 "num_base_bdevs": 3, 00:09:18.603 "num_base_bdevs_discovered": 3, 00:09:18.603 "num_base_bdevs_operational": 3, 00:09:18.603 "base_bdevs_list": [ 00:09:18.603 { 00:09:18.603 "name": "NewBaseBdev", 00:09:18.603 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:18.603 "is_configured": true, 00:09:18.603 "data_offset": 0, 00:09:18.603 "data_size": 65536 00:09:18.603 }, 00:09:18.603 { 00:09:18.603 "name": "BaseBdev2", 00:09:18.603 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:18.603 "is_configured": true, 00:09:18.603 "data_offset": 0, 00:09:18.603 "data_size": 65536 00:09:18.603 }, 00:09:18.603 { 00:09:18.603 "name": "BaseBdev3", 00:09:18.603 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:18.603 "is_configured": true, 00:09:18.603 "data_offset": 0, 00:09:18.603 "data_size": 65536 00:09:18.603 } 00:09:18.603 ] 00:09:18.603 } 00:09:18.603 } 00:09:18.603 }' 00:09:18.603 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:18.603 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:18.603 BaseBdev2 00:09:18.603 BaseBdev3' 00:09:18.603 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:18.603 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:18.603 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:18.862 "name": "NewBaseBdev", 00:09:18.862 "aliases": [ 00:09:18.862 "98f37f85-42cf-11ef-96ac-773515fba644" 00:09:18.862 ], 00:09:18.862 "product_name": "Malloc disk", 00:09:18.862 "block_size": 512, 00:09:18.862 "num_blocks": 65536, 00:09:18.862 "uuid": "98f37f85-42cf-11ef-96ac-773515fba644", 00:09:18.862 "assigned_rate_limits": { 00:09:18.862 "rw_ios_per_sec": 0, 00:09:18.862 "rw_mbytes_per_sec": 0, 00:09:18.862 "r_mbytes_per_sec": 0, 00:09:18.862 "w_mbytes_per_sec": 0 00:09:18.862 }, 00:09:18.862 "claimed": true, 00:09:18.862 "claim_type": "exclusive_write", 00:09:18.862 "zoned": false, 00:09:18.862 "supported_io_types": { 00:09:18.862 "read": true, 00:09:18.862 "write": true, 00:09:18.862 "unmap": true, 00:09:18.862 "flush": true, 00:09:18.862 "reset": true, 00:09:18.862 "nvme_admin": false, 00:09:18.862 "nvme_io": false, 
00:09:18.862 "nvme_io_md": false, 00:09:18.862 "write_zeroes": true, 00:09:18.862 "zcopy": true, 00:09:18.862 "get_zone_info": false, 00:09:18.862 "zone_management": false, 00:09:18.862 "zone_append": false, 00:09:18.862 "compare": false, 00:09:18.862 "compare_and_write": false, 00:09:18.862 "abort": true, 00:09:18.862 "seek_hole": false, 00:09:18.862 "seek_data": false, 00:09:18.862 "copy": true, 00:09:18.862 "nvme_iov_md": false 00:09:18.862 }, 00:09:18.862 "memory_domains": [ 00:09:18.862 { 00:09:18.862 "dma_device_id": "system", 00:09:18.862 "dma_device_type": 1 00:09:18.862 }, 00:09:18.862 { 00:09:18.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.862 "dma_device_type": 2 00:09:18.862 } 00:09:18.862 ], 00:09:18.862 "driver_specific": {} 00:09:18.862 }' 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:18.862 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:19.120 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:19.120 "name": "BaseBdev2", 00:09:19.120 "aliases": [ 00:09:19.120 "96cffe21-42cf-11ef-96ac-773515fba644" 00:09:19.120 ], 00:09:19.120 "product_name": "Malloc disk", 00:09:19.120 "block_size": 512, 00:09:19.120 "num_blocks": 65536, 00:09:19.120 "uuid": "96cffe21-42cf-11ef-96ac-773515fba644", 00:09:19.120 "assigned_rate_limits": { 00:09:19.120 "rw_ios_per_sec": 0, 00:09:19.120 "rw_mbytes_per_sec": 0, 00:09:19.120 "r_mbytes_per_sec": 0, 00:09:19.120 "w_mbytes_per_sec": 0 00:09:19.120 }, 00:09:19.120 "claimed": true, 00:09:19.120 "claim_type": "exclusive_write", 00:09:19.120 "zoned": false, 00:09:19.120 "supported_io_types": { 00:09:19.120 "read": true, 00:09:19.120 "write": true, 00:09:19.120 "unmap": true, 00:09:19.120 "flush": true, 00:09:19.120 "reset": true, 00:09:19.120 "nvme_admin": false, 00:09:19.120 "nvme_io": false, 00:09:19.120 "nvme_io_md": false, 00:09:19.120 "write_zeroes": true, 00:09:19.120 "zcopy": true, 00:09:19.120 "get_zone_info": false, 00:09:19.120 "zone_management": false, 00:09:19.120 "zone_append": 
false, 00:09:19.120 "compare": false, 00:09:19.120 "compare_and_write": false, 00:09:19.120 "abort": true, 00:09:19.120 "seek_hole": false, 00:09:19.120 "seek_data": false, 00:09:19.120 "copy": true, 00:09:19.120 "nvme_iov_md": false 00:09:19.120 }, 00:09:19.120 "memory_domains": [ 00:09:19.120 { 00:09:19.120 "dma_device_id": "system", 00:09:19.120 "dma_device_type": 1 00:09:19.120 }, 00:09:19.120 { 00:09:19.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.120 "dma_device_type": 2 00:09:19.120 } 00:09:19.120 ], 00:09:19.120 "driver_specific": {} 00:09:19.120 }' 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:19.121 17:28:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:19.379 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:19.379 "name": "BaseBdev3", 00:09:19.379 "aliases": [ 00:09:19.379 "9755aa9e-42cf-11ef-96ac-773515fba644" 00:09:19.379 ], 00:09:19.379 "product_name": "Malloc disk", 00:09:19.379 "block_size": 512, 00:09:19.379 "num_blocks": 65536, 00:09:19.379 "uuid": "9755aa9e-42cf-11ef-96ac-773515fba644", 00:09:19.379 "assigned_rate_limits": { 00:09:19.379 "rw_ios_per_sec": 0, 00:09:19.379 "rw_mbytes_per_sec": 0, 00:09:19.379 "r_mbytes_per_sec": 0, 00:09:19.379 "w_mbytes_per_sec": 0 00:09:19.379 }, 00:09:19.379 "claimed": true, 00:09:19.379 "claim_type": "exclusive_write", 00:09:19.379 "zoned": false, 00:09:19.379 "supported_io_types": { 00:09:19.379 "read": true, 00:09:19.379 "write": true, 00:09:19.379 "unmap": true, 00:09:19.379 "flush": true, 00:09:19.379 "reset": true, 00:09:19.379 "nvme_admin": false, 00:09:19.379 "nvme_io": false, 00:09:19.379 "nvme_io_md": false, 00:09:19.379 "write_zeroes": true, 00:09:19.379 "zcopy": true, 00:09:19.379 "get_zone_info": false, 00:09:19.379 "zone_management": false, 00:09:19.379 "zone_append": false, 00:09:19.379 "compare": false, 00:09:19.379 "compare_and_write": false, 00:09:19.379 "abort": true, 00:09:19.379 "seek_hole": false, 00:09:19.379 "seek_data": false, 00:09:19.379 "copy": true, 
00:09:19.379 "nvme_iov_md": false 00:09:19.379 }, 00:09:19.379 "memory_domains": [ 00:09:19.379 { 00:09:19.379 "dma_device_id": "system", 00:09:19.379 "dma_device_type": 1 00:09:19.379 }, 00:09:19.379 { 00:09:19.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.379 "dma_device_type": 2 00:09:19.379 } 00:09:19.379 ], 00:09:19.379 "driver_specific": {} 00:09:19.379 }' 00:09:19.379 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:19.379 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:19.379 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:19.379 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:19.379 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:19.379 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:19.379 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:19.638 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:19.638 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:19.638 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:19.638 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:19.638 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:19.638 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:19.638 [2024-07-15 17:28:15.459388] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:19.638 [2024-07-15 17:28:15.459413] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:19.638 [2024-07-15 17:28:15.459436] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:19.638 [2024-07-15 17:28:15.459450] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:19.638 [2024-07-15 17:28:15.459454] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2a8b3d634a00 name Existed_Raid, state offline 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51948 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 51948 ']' 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 51948 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 51948 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:19.896 killing process with pid 51948 00:09:19.896 17:28:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51948' 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 51948 00:09:19.896 [2024-07-15 17:28:15.486271] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 51948 00:09:19.896 [2024-07-15 17:28:15.503500] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:19.896 00:09:19.896 real 0m24.175s 00:09:19.896 user 0m44.297s 00:09:19.896 sys 0m3.209s 00:09:19.896 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.897 ************************************ 00:09:19.897 END TEST raid_state_function_test 00:09:19.897 ************************************ 00:09:19.897 17:28:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.897 17:28:15 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:19.897 17:28:15 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:19.897 17:28:15 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:19.897 17:28:15 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.897 17:28:15 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:20.155 ************************************ 00:09:20.155 START TEST raid_state_function_test_sb 00:09:20.155 ************************************ 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=52677 00:09:20.155 Process raid pid: 52677 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52677' 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 52677 /var/tmp/spdk-raid.sock 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 52677 ']' 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:20.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:20.155 17:28:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.155 [2024-07-15 17:28:15.740878] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:09:20.155 [2024-07-15 17:28:15.741090] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:20.721 EAL: TSC is not safe to use in SMP mode 00:09:20.721 EAL: TSC is not invariant 00:09:20.721 [2024-07-15 17:28:16.271889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:20.721 [2024-07-15 17:28:16.360585] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:09:20.721 [2024-07-15 17:28:16.362618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:20.721 [2024-07-15 17:28:16.363370] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.721 [2024-07-15 17:28:16.363383] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:20.978 17:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:20.978 17:28:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:20.978 17:28:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:21.235 [2024-07-15 17:28:17.007519] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.236 [2024-07-15 17:28:17.007570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.236 [2024-07-15 17:28:17.007575] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:21.236 [2024-07-15 17:28:17.007584] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:21.236 [2024-07-15 17:28:17.007587] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:21.236 [2024-07-15 17:28:17.007594] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.236 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.500 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:21.500 "name": "Existed_Raid", 00:09:21.500 "uuid": "9f4d33eb-42cf-11ef-96ac-773515fba644", 00:09:21.500 "strip_size_kb": 64, 00:09:21.500 "state": "configuring", 00:09:21.500 "raid_level": "raid0", 00:09:21.500 "superblock": true, 00:09:21.500 "num_base_bdevs": 3, 00:09:21.500 "num_base_bdevs_discovered": 0, 00:09:21.500 
"num_base_bdevs_operational": 3, 00:09:21.500 "base_bdevs_list": [ 00:09:21.500 { 00:09:21.500 "name": "BaseBdev1", 00:09:21.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.500 "is_configured": false, 00:09:21.500 "data_offset": 0, 00:09:21.500 "data_size": 0 00:09:21.500 }, 00:09:21.500 { 00:09:21.500 "name": "BaseBdev2", 00:09:21.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.500 "is_configured": false, 00:09:21.500 "data_offset": 0, 00:09:21.500 "data_size": 0 00:09:21.500 }, 00:09:21.500 { 00:09:21.500 "name": "BaseBdev3", 00:09:21.500 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.500 "is_configured": false, 00:09:21.500 "data_offset": 0, 00:09:21.500 "data_size": 0 00:09:21.500 } 00:09:21.500 ] 00:09:21.500 }' 00:09:21.500 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:21.500 17:28:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.757 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:22.016 [2024-07-15 17:28:17.815508] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:22.016 [2024-07-15 17:28:17.815536] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18e569e34500 name Existed_Raid, state configuring 00:09:22.016 17:28:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:22.274 [2024-07-15 17:28:18.055519] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:22.274 [2024-07-15 17:28:18.055565] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:22.274 [2024-07-15 17:28:18.055571] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:22.274 [2024-07-15 17:28:18.055579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:22.274 [2024-07-15 17:28:18.055582] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:22.274 [2024-07-15 17:28:18.055590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:22.274 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:22.531 [2024-07-15 17:28:18.332527] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.531 BaseBdev1 00:09:22.531 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:22.531 17:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:22.531 17:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:22.531 17:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:22.531 17:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:22.531 17:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:22.531 17:28:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:22.789 17:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:23.047 [ 00:09:23.047 { 00:09:23.047 "name": "BaseBdev1", 00:09:23.047 "aliases": [ 00:09:23.047 "a0173b7f-42cf-11ef-96ac-773515fba644" 00:09:23.047 ], 00:09:23.047 "product_name": "Malloc disk", 00:09:23.047 "block_size": 512, 00:09:23.047 "num_blocks": 65536, 00:09:23.047 "uuid": "a0173b7f-42cf-11ef-96ac-773515fba644", 00:09:23.047 "assigned_rate_limits": { 00:09:23.047 "rw_ios_per_sec": 0, 00:09:23.047 "rw_mbytes_per_sec": 0, 00:09:23.047 "r_mbytes_per_sec": 0, 00:09:23.047 "w_mbytes_per_sec": 0 00:09:23.047 }, 00:09:23.047 "claimed": true, 00:09:23.047 "claim_type": "exclusive_write", 00:09:23.047 "zoned": false, 00:09:23.047 "supported_io_types": { 00:09:23.047 "read": true, 00:09:23.047 "write": true, 00:09:23.047 "unmap": true, 00:09:23.047 "flush": true, 00:09:23.047 "reset": true, 00:09:23.047 "nvme_admin": false, 00:09:23.047 "nvme_io": false, 00:09:23.047 "nvme_io_md": false, 00:09:23.047 "write_zeroes": true, 00:09:23.047 "zcopy": true, 00:09:23.047 "get_zone_info": false, 00:09:23.047 "zone_management": false, 00:09:23.047 "zone_append": false, 00:09:23.047 "compare": false, 00:09:23.047 "compare_and_write": false, 00:09:23.047 "abort": true, 00:09:23.047 "seek_hole": false, 00:09:23.047 "seek_data": false, 00:09:23.047 "copy": true, 00:09:23.047 "nvme_iov_md": false 00:09:23.047 }, 00:09:23.047 "memory_domains": [ 00:09:23.047 { 00:09:23.047 "dma_device_id": "system", 00:09:23.047 "dma_device_type": 1 00:09:23.047 }, 00:09:23.047 { 00:09:23.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.047 "dma_device_type": 2 00:09:23.047 } 00:09:23.047 ], 00:09:23.047 "driver_specific": {} 00:09:23.047 } 00:09:23.047 ] 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:23.047 17:28:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:23.047 17:28:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.613 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:23.613 "name": "Existed_Raid", 00:09:23.613 "uuid": "9fed1d7a-42cf-11ef-96ac-773515fba644", 00:09:23.614 "strip_size_kb": 64, 00:09:23.614 "state": "configuring", 00:09:23.614 "raid_level": "raid0", 00:09:23.614 "superblock": true, 00:09:23.614 "num_base_bdevs": 3, 00:09:23.614 "num_base_bdevs_discovered": 1, 00:09:23.614 "num_base_bdevs_operational": 3, 00:09:23.614 "base_bdevs_list": [ 00:09:23.614 { 00:09:23.614 "name": "BaseBdev1", 00:09:23.614 "uuid": "a0173b7f-42cf-11ef-96ac-773515fba644", 00:09:23.614 "is_configured": true, 00:09:23.614 "data_offset": 2048, 00:09:23.614 "data_size": 63488 00:09:23.614 }, 00:09:23.614 { 00:09:23.614 "name": "BaseBdev2", 00:09:23.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.614 "is_configured": false, 00:09:23.614 "data_offset": 0, 00:09:23.614 "data_size": 0 00:09:23.614 }, 00:09:23.614 { 00:09:23.614 "name": "BaseBdev3", 00:09:23.614 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:23.614 "is_configured": false, 00:09:23.614 "data_offset": 0, 00:09:23.614 "data_size": 0 00:09:23.614 } 00:09:23.614 ] 00:09:23.614 }' 00:09:23.614 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:23.614 17:28:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.874 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:24.137 [2024-07-15 17:28:19.739548] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:24.137 [2024-07-15 17:28:19.739581] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18e569e34500 name Existed_Raid, state configuring 00:09:24.137 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:24.396 [2024-07-15 17:28:19.979565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:24.397 [2024-07-15 17:28:19.980354] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:24.397 [2024-07-15 17:28:19.980390] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:24.397 [2024-07-15 17:28:19.980396] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:24.397 [2024-07-15 17:28:19.980404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:24.397 17:28:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:24.397 17:28:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:24.397 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.397 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.654 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:24.654 "name": "Existed_Raid", 00:09:24.654 "uuid": "a112b367-42cf-11ef-96ac-773515fba644", 00:09:24.654 "strip_size_kb": 64, 00:09:24.654 "state": "configuring", 00:09:24.654 "raid_level": "raid0", 00:09:24.654 "superblock": true, 00:09:24.654 "num_base_bdevs": 3, 00:09:24.654 "num_base_bdevs_discovered": 1, 00:09:24.655 "num_base_bdevs_operational": 3, 00:09:24.655 "base_bdevs_list": [ 00:09:24.655 { 00:09:24.655 "name": "BaseBdev1", 00:09:24.655 "uuid": "a0173b7f-42cf-11ef-96ac-773515fba644", 00:09:24.655 "is_configured": true, 00:09:24.655 "data_offset": 2048, 00:09:24.655 "data_size": 63488 00:09:24.655 }, 00:09:24.655 { 00:09:24.655 "name": "BaseBdev2", 00:09:24.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.655 "is_configured": false, 00:09:24.655 "data_offset": 0, 00:09:24.655 "data_size": 0 00:09:24.655 }, 00:09:24.655 { 00:09:24.655 "name": "BaseBdev3", 00:09:24.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:24.655 "is_configured": false, 00:09:24.655 "data_offset": 0, 00:09:24.655 "data_size": 0 00:09:24.655 } 00:09:24.655 ] 00:09:24.655 }' 00:09:24.655 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:24.655 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.912 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:25.170 [2024-07-15 17:28:20.779713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:25.170 BaseBdev2 00:09:25.170 17:28:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:25.170 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:25.170 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:25.170 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:25.170 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:25.170 17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:25.170 
17:28:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:25.428 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:25.686 [ 00:09:25.686 { 00:09:25.686 "name": "BaseBdev2", 00:09:25.686 "aliases": [ 00:09:25.686 "a18cc651-42cf-11ef-96ac-773515fba644" 00:09:25.686 ], 00:09:25.686 "product_name": "Malloc disk", 00:09:25.686 "block_size": 512, 00:09:25.686 "num_blocks": 65536, 00:09:25.686 "uuid": "a18cc651-42cf-11ef-96ac-773515fba644", 00:09:25.686 "assigned_rate_limits": { 00:09:25.686 "rw_ios_per_sec": 0, 00:09:25.686 "rw_mbytes_per_sec": 0, 00:09:25.686 "r_mbytes_per_sec": 0, 00:09:25.686 "w_mbytes_per_sec": 0 00:09:25.686 }, 00:09:25.686 "claimed": true, 00:09:25.686 "claim_type": "exclusive_write", 00:09:25.686 "zoned": false, 00:09:25.686 "supported_io_types": { 00:09:25.686 "read": true, 00:09:25.686 "write": true, 00:09:25.686 "unmap": true, 00:09:25.686 "flush": true, 00:09:25.686 "reset": true, 00:09:25.686 "nvme_admin": false, 00:09:25.686 "nvme_io": false, 00:09:25.686 "nvme_io_md": false, 00:09:25.686 "write_zeroes": true, 00:09:25.686 "zcopy": true, 00:09:25.686 "get_zone_info": false, 00:09:25.686 "zone_management": false, 00:09:25.686 "zone_append": false, 00:09:25.686 "compare": false, 00:09:25.686 "compare_and_write": false, 00:09:25.686 "abort": true, 00:09:25.686 "seek_hole": false, 00:09:25.686 "seek_data": false, 00:09:25.686 "copy": true, 00:09:25.686 "nvme_iov_md": false 00:09:25.686 }, 00:09:25.686 "memory_domains": [ 00:09:25.686 { 00:09:25.686 "dma_device_id": "system", 00:09:25.686 "dma_device_type": 1 00:09:25.686 }, 00:09:25.686 { 00:09:25.686 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.686 "dma_device_type": 2 00:09:25.686 } 00:09:25.687 ], 00:09:25.687 "driver_specific": {} 00:09:25.687 } 00:09:25.687 ] 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.687 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.945 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:25.945 "name": "Existed_Raid", 00:09:25.945 "uuid": "a112b367-42cf-11ef-96ac-773515fba644", 00:09:25.945 "strip_size_kb": 64, 00:09:25.945 "state": "configuring", 00:09:25.945 "raid_level": "raid0", 00:09:25.945 "superblock": true, 00:09:25.945 "num_base_bdevs": 3, 00:09:25.945 "num_base_bdevs_discovered": 2, 00:09:25.945 "num_base_bdevs_operational": 3, 00:09:25.945 "base_bdevs_list": [ 00:09:25.945 { 00:09:25.945 "name": "BaseBdev1", 00:09:25.945 "uuid": "a0173b7f-42cf-11ef-96ac-773515fba644", 00:09:25.945 "is_configured": true, 00:09:25.945 "data_offset": 2048, 00:09:25.945 "data_size": 63488 00:09:25.945 }, 00:09:25.945 { 00:09:25.945 "name": "BaseBdev2", 00:09:25.945 "uuid": "a18cc651-42cf-11ef-96ac-773515fba644", 00:09:25.945 "is_configured": true, 00:09:25.945 "data_offset": 2048, 00:09:25.945 "data_size": 63488 00:09:25.945 }, 00:09:25.945 { 00:09:25.945 "name": "BaseBdev3", 00:09:25.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.945 "is_configured": false, 00:09:25.945 "data_offset": 0, 00:09:25.945 "data_size": 0 00:09:25.945 } 00:09:25.945 ] 00:09:25.945 }' 00:09:25.945 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:25.945 17:28:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:26.204 17:28:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:26.204 [2024-07-15 17:28:22.031725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:26.204 [2024-07-15 17:28:22.031794] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18e569e34a00 00:09:26.204 [2024-07-15 17:28:22.031800] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:26.204 [2024-07-15 17:28:22.031821] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18e569e97e20 00:09:26.204 [2024-07-15 17:28:22.031873] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18e569e34a00 00:09:26.204 [2024-07-15 17:28:22.031878] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x18e569e34a00 00:09:26.204 [2024-07-15 17:28:22.031903] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.462 BaseBdev3 00:09:26.462 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:26.462 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:26.462 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:26.462 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:26.462 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:26.462 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:09:26.462 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:26.721 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:26.721 [ 00:09:26.721 { 00:09:26.721 "name": "BaseBdev3", 00:09:26.721 "aliases": [ 00:09:26.721 "a24bd1a4-42cf-11ef-96ac-773515fba644" 00:09:26.721 ], 00:09:26.721 "product_name": "Malloc disk", 00:09:26.721 "block_size": 512, 00:09:26.721 "num_blocks": 65536, 00:09:26.721 "uuid": "a24bd1a4-42cf-11ef-96ac-773515fba644", 00:09:26.721 "assigned_rate_limits": { 00:09:26.721 "rw_ios_per_sec": 0, 00:09:26.721 "rw_mbytes_per_sec": 0, 00:09:26.721 "r_mbytes_per_sec": 0, 00:09:26.721 "w_mbytes_per_sec": 0 00:09:26.721 }, 00:09:26.721 "claimed": true, 00:09:26.721 "claim_type": "exclusive_write", 00:09:26.721 "zoned": false, 00:09:26.721 "supported_io_types": { 00:09:26.721 "read": true, 00:09:26.721 "write": true, 00:09:26.721 "unmap": true, 00:09:26.721 "flush": true, 00:09:26.721 "reset": true, 00:09:26.721 "nvme_admin": false, 00:09:26.721 "nvme_io": false, 00:09:26.721 "nvme_io_md": false, 00:09:26.721 "write_zeroes": true, 00:09:26.721 "zcopy": true, 00:09:26.721 "get_zone_info": false, 00:09:26.721 "zone_management": false, 00:09:26.721 "zone_append": false, 00:09:26.721 "compare": false, 00:09:26.721 "compare_and_write": false, 00:09:26.721 "abort": true, 00:09:26.721 "seek_hole": false, 00:09:26.721 "seek_data": false, 00:09:26.721 "copy": true, 00:09:26.721 "nvme_iov_md": false 00:09:26.721 }, 00:09:26.721 "memory_domains": [ 00:09:26.721 { 00:09:26.721 "dma_device_id": "system", 00:09:26.721 "dma_device_type": 1 00:09:26.721 }, 00:09:26.721 { 00:09:26.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:26.721 "dma_device_type": 2 00:09:26.721 } 00:09:26.721 ], 00:09:26.721 "driver_specific": {} 00:09:26.721 } 00:09:26.721 ] 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.979 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.237 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:27.237 "name": "Existed_Raid", 00:09:27.237 "uuid": "a112b367-42cf-11ef-96ac-773515fba644", 00:09:27.237 "strip_size_kb": 64, 00:09:27.237 "state": "online", 00:09:27.237 "raid_level": "raid0", 00:09:27.237 "superblock": true, 00:09:27.237 "num_base_bdevs": 3, 00:09:27.237 "num_base_bdevs_discovered": 3, 00:09:27.237 "num_base_bdevs_operational": 3, 00:09:27.237 "base_bdevs_list": [ 00:09:27.237 { 00:09:27.237 "name": "BaseBdev1", 00:09:27.237 "uuid": "a0173b7f-42cf-11ef-96ac-773515fba644", 00:09:27.237 "is_configured": true, 00:09:27.237 "data_offset": 2048, 00:09:27.237 "data_size": 63488 00:09:27.237 }, 00:09:27.237 { 00:09:27.237 "name": "BaseBdev2", 00:09:27.238 "uuid": "a18cc651-42cf-11ef-96ac-773515fba644", 00:09:27.238 "is_configured": true, 00:09:27.238 "data_offset": 2048, 00:09:27.238 "data_size": 63488 00:09:27.238 }, 00:09:27.238 { 00:09:27.238 "name": "BaseBdev3", 00:09:27.238 "uuid": "a24bd1a4-42cf-11ef-96ac-773515fba644", 00:09:27.238 "is_configured": true, 00:09:27.238 "data_offset": 2048, 00:09:27.238 "data_size": 63488 00:09:27.238 } 00:09:27.238 ] 00:09:27.238 }' 00:09:27.238 17:28:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:27.238 17:28:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.496 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:27.496 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:27.496 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:27.496 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:27.496 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:27.496 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:27.496 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:27.496 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:27.754 [2024-07-15 17:28:23.491660] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:27.754 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:27.754 "name": "Existed_Raid", 00:09:27.754 "aliases": [ 00:09:27.754 "a112b367-42cf-11ef-96ac-773515fba644" 00:09:27.754 ], 00:09:27.754 "product_name": "Raid Volume", 00:09:27.754 "block_size": 512, 00:09:27.754 "num_blocks": 190464, 00:09:27.754 "uuid": "a112b367-42cf-11ef-96ac-773515fba644", 00:09:27.754 "assigned_rate_limits": { 00:09:27.754 "rw_ios_per_sec": 0, 00:09:27.754 "rw_mbytes_per_sec": 0, 00:09:27.754 "r_mbytes_per_sec": 0, 00:09:27.754 "w_mbytes_per_sec": 0 00:09:27.754 }, 00:09:27.754 "claimed": false, 00:09:27.754 "zoned": false, 
00:09:27.754 "supported_io_types": { 00:09:27.754 "read": true, 00:09:27.754 "write": true, 00:09:27.754 "unmap": true, 00:09:27.754 "flush": true, 00:09:27.754 "reset": true, 00:09:27.754 "nvme_admin": false, 00:09:27.754 "nvme_io": false, 00:09:27.754 "nvme_io_md": false, 00:09:27.754 "write_zeroes": true, 00:09:27.754 "zcopy": false, 00:09:27.754 "get_zone_info": false, 00:09:27.754 "zone_management": false, 00:09:27.754 "zone_append": false, 00:09:27.754 "compare": false, 00:09:27.754 "compare_and_write": false, 00:09:27.754 "abort": false, 00:09:27.754 "seek_hole": false, 00:09:27.754 "seek_data": false, 00:09:27.754 "copy": false, 00:09:27.754 "nvme_iov_md": false 00:09:27.754 }, 00:09:27.754 "memory_domains": [ 00:09:27.754 { 00:09:27.754 "dma_device_id": "system", 00:09:27.754 "dma_device_type": 1 00:09:27.754 }, 00:09:27.754 { 00:09:27.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.754 "dma_device_type": 2 00:09:27.754 }, 00:09:27.754 { 00:09:27.754 "dma_device_id": "system", 00:09:27.754 "dma_device_type": 1 00:09:27.754 }, 00:09:27.754 { 00:09:27.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.754 "dma_device_type": 2 00:09:27.754 }, 00:09:27.754 { 00:09:27.754 "dma_device_id": "system", 00:09:27.754 "dma_device_type": 1 00:09:27.754 }, 00:09:27.754 { 00:09:27.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.754 "dma_device_type": 2 00:09:27.754 } 00:09:27.754 ], 00:09:27.754 "driver_specific": { 00:09:27.754 "raid": { 00:09:27.754 "uuid": "a112b367-42cf-11ef-96ac-773515fba644", 00:09:27.754 "strip_size_kb": 64, 00:09:27.754 "state": "online", 00:09:27.754 "raid_level": "raid0", 00:09:27.754 "superblock": true, 00:09:27.754 "num_base_bdevs": 3, 00:09:27.754 "num_base_bdevs_discovered": 3, 00:09:27.754 "num_base_bdevs_operational": 3, 00:09:27.754 "base_bdevs_list": [ 00:09:27.754 { 00:09:27.754 "name": "BaseBdev1", 00:09:27.754 "uuid": "a0173b7f-42cf-11ef-96ac-773515fba644", 00:09:27.754 "is_configured": true, 00:09:27.754 "data_offset": 2048, 00:09:27.754 "data_size": 63488 00:09:27.754 }, 00:09:27.754 { 00:09:27.754 "name": "BaseBdev2", 00:09:27.754 "uuid": "a18cc651-42cf-11ef-96ac-773515fba644", 00:09:27.754 "is_configured": true, 00:09:27.754 "data_offset": 2048, 00:09:27.754 "data_size": 63488 00:09:27.754 }, 00:09:27.754 { 00:09:27.754 "name": "BaseBdev3", 00:09:27.754 "uuid": "a24bd1a4-42cf-11ef-96ac-773515fba644", 00:09:27.754 "is_configured": true, 00:09:27.754 "data_offset": 2048, 00:09:27.754 "data_size": 63488 00:09:27.754 } 00:09:27.754 ] 00:09:27.754 } 00:09:27.754 } 00:09:27.754 }' 00:09:27.754 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:27.754 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:27.754 BaseBdev2 00:09:27.754 BaseBdev3' 00:09:27.754 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:27.754 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:27.754 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:28.013 "name": "BaseBdev1", 00:09:28.013 "aliases": [ 00:09:28.013 "a0173b7f-42cf-11ef-96ac-773515fba644" 00:09:28.013 
], 00:09:28.013 "product_name": "Malloc disk", 00:09:28.013 "block_size": 512, 00:09:28.013 "num_blocks": 65536, 00:09:28.013 "uuid": "a0173b7f-42cf-11ef-96ac-773515fba644", 00:09:28.013 "assigned_rate_limits": { 00:09:28.013 "rw_ios_per_sec": 0, 00:09:28.013 "rw_mbytes_per_sec": 0, 00:09:28.013 "r_mbytes_per_sec": 0, 00:09:28.013 "w_mbytes_per_sec": 0 00:09:28.013 }, 00:09:28.013 "claimed": true, 00:09:28.013 "claim_type": "exclusive_write", 00:09:28.013 "zoned": false, 00:09:28.013 "supported_io_types": { 00:09:28.013 "read": true, 00:09:28.013 "write": true, 00:09:28.013 "unmap": true, 00:09:28.013 "flush": true, 00:09:28.013 "reset": true, 00:09:28.013 "nvme_admin": false, 00:09:28.013 "nvme_io": false, 00:09:28.013 "nvme_io_md": false, 00:09:28.013 "write_zeroes": true, 00:09:28.013 "zcopy": true, 00:09:28.013 "get_zone_info": false, 00:09:28.013 "zone_management": false, 00:09:28.013 "zone_append": false, 00:09:28.013 "compare": false, 00:09:28.013 "compare_and_write": false, 00:09:28.013 "abort": true, 00:09:28.013 "seek_hole": false, 00:09:28.013 "seek_data": false, 00:09:28.013 "copy": true, 00:09:28.013 "nvme_iov_md": false 00:09:28.013 }, 00:09:28.013 "memory_domains": [ 00:09:28.013 { 00:09:28.013 "dma_device_id": "system", 00:09:28.013 "dma_device_type": 1 00:09:28.013 }, 00:09:28.013 { 00:09:28.013 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.013 "dma_device_type": 2 00:09:28.013 } 00:09:28.013 ], 00:09:28.013 "driver_specific": {} 00:09:28.013 }' 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:28.013 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:28.271 17:28:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:28.530 "name": "BaseBdev2", 00:09:28.530 "aliases": [ 00:09:28.530 "a18cc651-42cf-11ef-96ac-773515fba644" 00:09:28.530 ], 00:09:28.530 "product_name": "Malloc disk", 00:09:28.530 "block_size": 512, 00:09:28.530 "num_blocks": 65536, 00:09:28.530 "uuid": 
"a18cc651-42cf-11ef-96ac-773515fba644", 00:09:28.530 "assigned_rate_limits": { 00:09:28.530 "rw_ios_per_sec": 0, 00:09:28.530 "rw_mbytes_per_sec": 0, 00:09:28.530 "r_mbytes_per_sec": 0, 00:09:28.530 "w_mbytes_per_sec": 0 00:09:28.530 }, 00:09:28.530 "claimed": true, 00:09:28.530 "claim_type": "exclusive_write", 00:09:28.530 "zoned": false, 00:09:28.530 "supported_io_types": { 00:09:28.530 "read": true, 00:09:28.530 "write": true, 00:09:28.530 "unmap": true, 00:09:28.530 "flush": true, 00:09:28.530 "reset": true, 00:09:28.530 "nvme_admin": false, 00:09:28.530 "nvme_io": false, 00:09:28.530 "nvme_io_md": false, 00:09:28.530 "write_zeroes": true, 00:09:28.530 "zcopy": true, 00:09:28.530 "get_zone_info": false, 00:09:28.530 "zone_management": false, 00:09:28.530 "zone_append": false, 00:09:28.530 "compare": false, 00:09:28.530 "compare_and_write": false, 00:09:28.530 "abort": true, 00:09:28.530 "seek_hole": false, 00:09:28.530 "seek_data": false, 00:09:28.530 "copy": true, 00:09:28.530 "nvme_iov_md": false 00:09:28.530 }, 00:09:28.530 "memory_domains": [ 00:09:28.530 { 00:09:28.530 "dma_device_id": "system", 00:09:28.530 "dma_device_type": 1 00:09:28.530 }, 00:09:28.530 { 00:09:28.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.530 "dma_device_type": 2 00:09:28.530 } 00:09:28.530 ], 00:09:28.530 "driver_specific": {} 00:09:28.530 }' 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:28.530 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:28.788 "name": "BaseBdev3", 00:09:28.788 "aliases": [ 00:09:28.788 "a24bd1a4-42cf-11ef-96ac-773515fba644" 00:09:28.788 ], 00:09:28.788 "product_name": "Malloc disk", 00:09:28.788 "block_size": 512, 00:09:28.788 "num_blocks": 65536, 00:09:28.788 "uuid": "a24bd1a4-42cf-11ef-96ac-773515fba644", 00:09:28.788 "assigned_rate_limits": { 00:09:28.788 "rw_ios_per_sec": 0, 00:09:28.788 "rw_mbytes_per_sec": 0, 
00:09:28.788 "r_mbytes_per_sec": 0, 00:09:28.788 "w_mbytes_per_sec": 0 00:09:28.788 }, 00:09:28.788 "claimed": true, 00:09:28.788 "claim_type": "exclusive_write", 00:09:28.788 "zoned": false, 00:09:28.788 "supported_io_types": { 00:09:28.788 "read": true, 00:09:28.788 "write": true, 00:09:28.788 "unmap": true, 00:09:28.788 "flush": true, 00:09:28.788 "reset": true, 00:09:28.788 "nvme_admin": false, 00:09:28.788 "nvme_io": false, 00:09:28.788 "nvme_io_md": false, 00:09:28.788 "write_zeroes": true, 00:09:28.788 "zcopy": true, 00:09:28.788 "get_zone_info": false, 00:09:28.788 "zone_management": false, 00:09:28.788 "zone_append": false, 00:09:28.788 "compare": false, 00:09:28.788 "compare_and_write": false, 00:09:28.788 "abort": true, 00:09:28.788 "seek_hole": false, 00:09:28.788 "seek_data": false, 00:09:28.788 "copy": true, 00:09:28.788 "nvme_iov_md": false 00:09:28.788 }, 00:09:28.788 "memory_domains": [ 00:09:28.788 { 00:09:28.788 "dma_device_id": "system", 00:09:28.788 "dma_device_type": 1 00:09:28.788 }, 00:09:28.788 { 00:09:28.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:28.788 "dma_device_type": 2 00:09:28.788 } 00:09:28.788 ], 00:09:28.788 "driver_specific": {} 00:09:28.788 }' 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:28.788 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:29.046 [2024-07-15 17:28:24.783652] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.046 [2024-07-15 17:28:24.783680] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.046 [2024-07-15 17:28:24.783695] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # 
expected_state=offline 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.046 17:28:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.305 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:29.305 "name": "Existed_Raid", 00:09:29.305 "uuid": "a112b367-42cf-11ef-96ac-773515fba644", 00:09:29.305 "strip_size_kb": 64, 00:09:29.305 "state": "offline", 00:09:29.305 "raid_level": "raid0", 00:09:29.305 "superblock": true, 00:09:29.305 "num_base_bdevs": 3, 00:09:29.305 "num_base_bdevs_discovered": 2, 00:09:29.305 "num_base_bdevs_operational": 2, 00:09:29.305 "base_bdevs_list": [ 00:09:29.305 { 00:09:29.305 "name": null, 00:09:29.305 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.305 "is_configured": false, 00:09:29.305 "data_offset": 2048, 00:09:29.305 "data_size": 63488 00:09:29.305 }, 00:09:29.305 { 00:09:29.305 "name": "BaseBdev2", 00:09:29.305 "uuid": "a18cc651-42cf-11ef-96ac-773515fba644", 00:09:29.305 "is_configured": true, 00:09:29.305 "data_offset": 2048, 00:09:29.305 "data_size": 63488 00:09:29.305 }, 00:09:29.305 { 00:09:29.305 "name": "BaseBdev3", 00:09:29.305 "uuid": "a24bd1a4-42cf-11ef-96ac-773515fba644", 00:09:29.305 "is_configured": true, 00:09:29.305 "data_offset": 2048, 00:09:29.305 "data_size": 63488 00:09:29.305 } 00:09:29.305 ] 00:09:29.305 }' 00:09:29.305 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:29.305 17:28:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:29.873 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:29.873 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:29.873 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:29.873 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.873 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:09:29.873 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:29.873 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:30.132 [2024-07-15 17:28:25.901523] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.132 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:30.132 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:30.132 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:30.132 17:28:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.390 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:30.390 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.390 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:30.648 [2024-07-15 17:28:26.431267] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:30.648 [2024-07-15 17:28:26.431300] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18e569e34a00 name Existed_Raid, state offline 00:09:30.648 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:30.648 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:30.648 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.648 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.906 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:30.906 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:30.906 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:30.906 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:30.906 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:30.906 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:31.164 BaseBdev2 00:09:31.164 17:28:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:31.164 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:31.164 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:31.164 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:31.164 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:31.164 17:28:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:31.164 17:28:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:31.421 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:31.680 [ 00:09:31.680 { 00:09:31.680 "name": "BaseBdev2", 00:09:31.680 "aliases": [ 00:09:31.680 "a538f040-42cf-11ef-96ac-773515fba644" 00:09:31.680 ], 00:09:31.680 "product_name": "Malloc disk", 00:09:31.680 "block_size": 512, 00:09:31.680 "num_blocks": 65536, 00:09:31.680 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:31.680 "assigned_rate_limits": { 00:09:31.680 "rw_ios_per_sec": 0, 00:09:31.680 "rw_mbytes_per_sec": 0, 00:09:31.680 "r_mbytes_per_sec": 0, 00:09:31.680 "w_mbytes_per_sec": 0 00:09:31.680 }, 00:09:31.680 "claimed": false, 00:09:31.680 "zoned": false, 00:09:31.680 "supported_io_types": { 00:09:31.680 "read": true, 00:09:31.680 "write": true, 00:09:31.680 "unmap": true, 00:09:31.680 "flush": true, 00:09:31.680 "reset": true, 00:09:31.680 "nvme_admin": false, 00:09:31.680 "nvme_io": false, 00:09:31.680 "nvme_io_md": false, 00:09:31.680 "write_zeroes": true, 00:09:31.680 "zcopy": true, 00:09:31.680 "get_zone_info": false, 00:09:31.680 "zone_management": false, 00:09:31.680 "zone_append": false, 00:09:31.680 "compare": false, 00:09:31.680 "compare_and_write": false, 00:09:31.680 "abort": true, 00:09:31.680 "seek_hole": false, 00:09:31.680 "seek_data": false, 00:09:31.680 "copy": true, 00:09:31.680 "nvme_iov_md": false 00:09:31.680 }, 00:09:31.680 "memory_domains": [ 00:09:31.680 { 00:09:31.680 "dma_device_id": "system", 00:09:31.680 "dma_device_type": 1 00:09:31.680 }, 00:09:31.680 { 00:09:31.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.680 "dma_device_type": 2 00:09:31.680 } 00:09:31.680 ], 00:09:31.680 "driver_specific": {} 00:09:31.680 } 00:09:31.680 ] 00:09:31.680 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:31.680 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:31.680 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:31.680 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:31.938 BaseBdev3 00:09:31.938 17:28:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:31.938 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:31.938 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:31.938 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:31.938 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:31.938 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:31.938 17:28:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:32.504 17:28:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:32.504 [ 00:09:32.504 { 00:09:32.504 "name": "BaseBdev3", 00:09:32.504 "aliases": [ 00:09:32.504 "a5b1ca21-42cf-11ef-96ac-773515fba644" 00:09:32.504 ], 00:09:32.504 "product_name": "Malloc disk", 00:09:32.504 "block_size": 512, 00:09:32.504 "num_blocks": 65536, 00:09:32.504 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:32.504 "assigned_rate_limits": { 00:09:32.504 "rw_ios_per_sec": 0, 00:09:32.504 "rw_mbytes_per_sec": 0, 00:09:32.504 "r_mbytes_per_sec": 0, 00:09:32.504 "w_mbytes_per_sec": 0 00:09:32.504 }, 00:09:32.504 "claimed": false, 00:09:32.504 "zoned": false, 00:09:32.504 "supported_io_types": { 00:09:32.504 "read": true, 00:09:32.504 "write": true, 00:09:32.504 "unmap": true, 00:09:32.504 "flush": true, 00:09:32.504 "reset": true, 00:09:32.504 "nvme_admin": false, 00:09:32.504 "nvme_io": false, 00:09:32.504 "nvme_io_md": false, 00:09:32.504 "write_zeroes": true, 00:09:32.504 "zcopy": true, 00:09:32.504 "get_zone_info": false, 00:09:32.504 "zone_management": false, 00:09:32.504 "zone_append": false, 00:09:32.504 "compare": false, 00:09:32.504 "compare_and_write": false, 00:09:32.504 "abort": true, 00:09:32.504 "seek_hole": false, 00:09:32.504 "seek_data": false, 00:09:32.504 "copy": true, 00:09:32.504 "nvme_iov_md": false 00:09:32.504 }, 00:09:32.504 "memory_domains": [ 00:09:32.504 { 00:09:32.504 "dma_device_id": "system", 00:09:32.504 "dma_device_type": 1 00:09:32.504 }, 00:09:32.504 { 00:09:32.504 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:32.504 "dma_device_type": 2 00:09:32.504 } 00:09:32.504 ], 00:09:32.504 "driver_specific": {} 00:09:32.504 } 00:09:32.504 ] 00:09:32.504 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:32.504 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:32.504 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:32.504 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:32.781 [2024-07-15 17:28:28.541048] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:32.781 [2024-07-15 17:28:28.541099] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:32.781 [2024-07-15 17:28:28.541108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:32.781 [2024-07-15 17:28:28.541652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:32.781 17:28:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:32.781 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:32.782 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.782 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.345 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:33.345 "name": "Existed_Raid", 00:09:33.345 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:33.345 "strip_size_kb": 64, 00:09:33.345 "state": "configuring", 00:09:33.345 "raid_level": "raid0", 00:09:33.345 "superblock": true, 00:09:33.345 "num_base_bdevs": 3, 00:09:33.345 "num_base_bdevs_discovered": 2, 00:09:33.345 "num_base_bdevs_operational": 3, 00:09:33.345 "base_bdevs_list": [ 00:09:33.345 { 00:09:33.345 "name": "BaseBdev1", 00:09:33.345 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.345 "is_configured": false, 00:09:33.345 "data_offset": 0, 00:09:33.345 "data_size": 0 00:09:33.345 }, 00:09:33.345 { 00:09:33.345 "name": "BaseBdev2", 00:09:33.345 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:33.345 "is_configured": true, 00:09:33.345 "data_offset": 2048, 00:09:33.345 "data_size": 63488 00:09:33.345 }, 00:09:33.345 { 00:09:33.345 "name": "BaseBdev3", 00:09:33.345 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:33.345 "is_configured": true, 00:09:33.345 "data_offset": 2048, 00:09:33.345 "data_size": 63488 00:09:33.345 } 00:09:33.345 ] 00:09:33.345 }' 00:09:33.345 17:28:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:33.345 17:28:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:33.345 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:33.602 [2024-07-15 17:28:29.413057] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.602 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.165 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:34.165 "name": "Existed_Raid", 00:09:34.165 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:34.165 "strip_size_kb": 64, 00:09:34.165 "state": "configuring", 00:09:34.165 "raid_level": "raid0", 00:09:34.165 "superblock": true, 00:09:34.165 "num_base_bdevs": 3, 00:09:34.165 "num_base_bdevs_discovered": 1, 00:09:34.165 "num_base_bdevs_operational": 3, 00:09:34.165 "base_bdevs_list": [ 00:09:34.165 { 00:09:34.165 "name": "BaseBdev1", 00:09:34.165 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.165 "is_configured": false, 00:09:34.165 "data_offset": 0, 00:09:34.165 "data_size": 0 00:09:34.165 }, 00:09:34.165 { 00:09:34.165 "name": null, 00:09:34.165 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:34.165 "is_configured": false, 00:09:34.165 "data_offset": 2048, 00:09:34.165 "data_size": 63488 00:09:34.165 }, 00:09:34.165 { 00:09:34.165 "name": "BaseBdev3", 00:09:34.165 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:34.165 "is_configured": true, 00:09:34.165 "data_offset": 2048, 00:09:34.165 "data_size": 63488 00:09:34.165 } 00:09:34.165 ] 00:09:34.165 }' 00:09:34.165 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:34.165 17:28:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:34.165 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.165 17:28:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:34.426 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:34.426 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.706 [2024-07-15 17:28:30.513202] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.706 BaseBdev1 00:09:34.965 17:28:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:34.965 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:34.965 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:34.965 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:34.965 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:34.965 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:34.965 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:35.223 17:28:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:35.480 [ 00:09:35.480 { 00:09:35.480 "name": "BaseBdev1", 00:09:35.480 "aliases": [ 00:09:35.480 "a759fd7a-42cf-11ef-96ac-773515fba644" 00:09:35.480 ], 00:09:35.480 "product_name": "Malloc disk", 00:09:35.480 "block_size": 512, 00:09:35.480 "num_blocks": 65536, 00:09:35.480 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:35.480 "assigned_rate_limits": { 00:09:35.480 "rw_ios_per_sec": 0, 00:09:35.480 "rw_mbytes_per_sec": 0, 00:09:35.480 "r_mbytes_per_sec": 0, 00:09:35.480 "w_mbytes_per_sec": 0 00:09:35.480 }, 00:09:35.480 "claimed": true, 00:09:35.480 "claim_type": "exclusive_write", 00:09:35.480 "zoned": false, 00:09:35.480 "supported_io_types": { 00:09:35.480 "read": true, 00:09:35.480 "write": true, 00:09:35.480 "unmap": true, 00:09:35.480 "flush": true, 00:09:35.480 "reset": true, 00:09:35.480 "nvme_admin": false, 00:09:35.480 "nvme_io": false, 00:09:35.480 "nvme_io_md": false, 00:09:35.480 "write_zeroes": true, 00:09:35.480 "zcopy": true, 00:09:35.480 "get_zone_info": false, 00:09:35.480 "zone_management": false, 00:09:35.480 "zone_append": false, 00:09:35.480 "compare": false, 00:09:35.480 "compare_and_write": false, 00:09:35.480 "abort": true, 00:09:35.480 "seek_hole": false, 00:09:35.480 "seek_data": false, 00:09:35.480 "copy": true, 00:09:35.480 "nvme_iov_md": false 00:09:35.480 }, 00:09:35.480 "memory_domains": [ 00:09:35.480 { 00:09:35.480 "dma_device_id": "system", 00:09:35.480 "dma_device_type": 1 00:09:35.480 }, 00:09:35.480 { 00:09:35.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.480 "dma_device_type": 2 00:09:35.480 } 00:09:35.480 ], 00:09:35.480 "driver_specific": {} 00:09:35.480 } 00:09:35.480 ] 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:35.480 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.481 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:09:35.737 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:35.737 "name": "Existed_Raid", 00:09:35.737 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:35.737 "strip_size_kb": 64, 00:09:35.737 "state": "configuring", 00:09:35.737 "raid_level": "raid0", 00:09:35.737 "superblock": true, 00:09:35.737 "num_base_bdevs": 3, 00:09:35.737 "num_base_bdevs_discovered": 2, 00:09:35.737 "num_base_bdevs_operational": 3, 00:09:35.737 "base_bdevs_list": [ 00:09:35.737 { 00:09:35.737 "name": "BaseBdev1", 00:09:35.737 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:35.737 "is_configured": true, 00:09:35.737 "data_offset": 2048, 00:09:35.737 "data_size": 63488 00:09:35.737 }, 00:09:35.737 { 00:09:35.737 "name": null, 00:09:35.737 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:35.737 "is_configured": false, 00:09:35.737 "data_offset": 2048, 00:09:35.737 "data_size": 63488 00:09:35.737 }, 00:09:35.737 { 00:09:35.737 "name": "BaseBdev3", 00:09:35.737 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:35.737 "is_configured": true, 00:09:35.737 "data_offset": 2048, 00:09:35.737 "data_size": 63488 00:09:35.737 } 00:09:35.737 ] 00:09:35.737 }' 00:09:35.737 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:35.737 17:28:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.994 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.994 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:36.251 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:36.251 17:28:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:36.511 [2024-07-15 17:28:32.221108] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:09:36.511 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.769 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:36.769 "name": "Existed_Raid", 00:09:36.769 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:36.769 "strip_size_kb": 64, 00:09:36.769 "state": "configuring", 00:09:36.769 "raid_level": "raid0", 00:09:36.769 "superblock": true, 00:09:36.769 "num_base_bdevs": 3, 00:09:36.769 "num_base_bdevs_discovered": 1, 00:09:36.769 "num_base_bdevs_operational": 3, 00:09:36.769 "base_bdevs_list": [ 00:09:36.769 { 00:09:36.769 "name": "BaseBdev1", 00:09:36.769 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:36.769 "is_configured": true, 00:09:36.769 "data_offset": 2048, 00:09:36.769 "data_size": 63488 00:09:36.769 }, 00:09:36.769 { 00:09:36.769 "name": null, 00:09:36.769 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:36.769 "is_configured": false, 00:09:36.769 "data_offset": 2048, 00:09:36.769 "data_size": 63488 00:09:36.769 }, 00:09:36.769 { 00:09:36.769 "name": null, 00:09:36.769 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:36.769 "is_configured": false, 00:09:36.769 "data_offset": 2048, 00:09:36.769 "data_size": 63488 00:09:36.769 } 00:09:36.769 ] 00:09:36.769 }' 00:09:36.769 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:36.769 17:28:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.026 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.026 17:28:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:37.590 [2024-07-15 17:28:33.357135] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:37.590 17:28:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.590 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.156 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:38.156 "name": "Existed_Raid", 00:09:38.156 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:38.156 "strip_size_kb": 64, 00:09:38.156 "state": "configuring", 00:09:38.156 "raid_level": "raid0", 00:09:38.156 "superblock": true, 00:09:38.156 "num_base_bdevs": 3, 00:09:38.156 "num_base_bdevs_discovered": 2, 00:09:38.156 "num_base_bdevs_operational": 3, 00:09:38.156 "base_bdevs_list": [ 00:09:38.156 { 00:09:38.156 "name": "BaseBdev1", 00:09:38.156 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:38.156 "is_configured": true, 00:09:38.156 "data_offset": 2048, 00:09:38.156 "data_size": 63488 00:09:38.156 }, 00:09:38.156 { 00:09:38.156 "name": null, 00:09:38.156 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:38.156 "is_configured": false, 00:09:38.156 "data_offset": 2048, 00:09:38.156 "data_size": 63488 00:09:38.156 }, 00:09:38.156 { 00:09:38.156 "name": "BaseBdev3", 00:09:38.156 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:38.156 "is_configured": true, 00:09:38.156 "data_offset": 2048, 00:09:38.156 "data_size": 63488 00:09:38.156 } 00:09:38.156 ] 00:09:38.156 }' 00:09:38.156 17:28:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:38.156 17:28:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:38.414 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.414 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:38.671 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:38.671 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:38.929 [2024-07-15 17:28:34.573161] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:38.929 
17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.929 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:39.187 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:39.187 "name": "Existed_Raid", 00:09:39.187 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:39.187 "strip_size_kb": 64, 00:09:39.187 "state": "configuring", 00:09:39.187 "raid_level": "raid0", 00:09:39.187 "superblock": true, 00:09:39.187 "num_base_bdevs": 3, 00:09:39.187 "num_base_bdevs_discovered": 1, 00:09:39.187 "num_base_bdevs_operational": 3, 00:09:39.187 "base_bdevs_list": [ 00:09:39.187 { 00:09:39.187 "name": null, 00:09:39.187 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:39.187 "is_configured": false, 00:09:39.187 "data_offset": 2048, 00:09:39.187 "data_size": 63488 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "name": null, 00:09:39.187 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:39.187 "is_configured": false, 00:09:39.187 "data_offset": 2048, 00:09:39.187 "data_size": 63488 00:09:39.187 }, 00:09:39.187 { 00:09:39.187 "name": "BaseBdev3", 00:09:39.187 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:39.187 "is_configured": true, 00:09:39.187 "data_offset": 2048, 00:09:39.187 "data_size": 63488 00:09:39.187 } 00:09:39.187 ] 00:09:39.187 }' 00:09:39.187 17:28:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:39.187 17:28:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.444 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.444 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:39.702 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:39.702 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:39.960 [2024-07-15 17:28:35.666984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.960 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.216 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.216 "name": "Existed_Raid", 00:09:40.216 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:40.216 "strip_size_kb": 64, 00:09:40.216 "state": "configuring", 00:09:40.216 "raid_level": "raid0", 00:09:40.217 "superblock": true, 00:09:40.217 "num_base_bdevs": 3, 00:09:40.217 "num_base_bdevs_discovered": 2, 00:09:40.217 "num_base_bdevs_operational": 3, 00:09:40.217 "base_bdevs_list": [ 00:09:40.217 { 00:09:40.217 "name": null, 00:09:40.217 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:40.217 "is_configured": false, 00:09:40.217 "data_offset": 2048, 00:09:40.217 "data_size": 63488 00:09:40.217 }, 00:09:40.217 { 00:09:40.217 "name": "BaseBdev2", 00:09:40.217 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:40.217 "is_configured": true, 00:09:40.217 "data_offset": 2048, 00:09:40.217 "data_size": 63488 00:09:40.217 }, 00:09:40.217 { 00:09:40.217 "name": "BaseBdev3", 00:09:40.217 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:40.217 "is_configured": true, 00:09:40.217 "data_offset": 2048, 00:09:40.217 "data_size": 63488 00:09:40.217 } 00:09:40.217 ] 00:09:40.217 }' 00:09:40.217 17:28:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.217 17:28:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.475 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.475 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:40.732 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:40.732 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.732 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:41.298 17:28:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u a759fd7a-42cf-11ef-96ac-773515fba644 00:09:41.298 [2024-07-15 17:28:37.059113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:41.298 [2024-07-15 17:28:37.059168] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18e569e34a00 00:09:41.298 [2024-07-15 17:28:37.059174] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:41.298 [2024-07-15 17:28:37.059195] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18e569e97e20 00:09:41.298 [2024-07-15 17:28:37.059253] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x18e569e34a00 00:09:41.298 [2024-07-15 17:28:37.059258] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x18e569e34a00 00:09:41.298 [2024-07-15 17:28:37.059278] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.298 NewBaseBdev 00:09:41.298 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:41.298 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:41.298 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:41.298 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:41.298 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:41.298 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:41.298 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:41.555 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:41.814 [ 00:09:41.814 { 00:09:41.814 "name": "NewBaseBdev", 00:09:41.814 "aliases": [ 00:09:41.814 "a759fd7a-42cf-11ef-96ac-773515fba644" 00:09:41.814 ], 00:09:41.814 "product_name": "Malloc disk", 00:09:41.814 "block_size": 512, 00:09:41.814 "num_blocks": 65536, 00:09:41.814 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:41.814 "assigned_rate_limits": { 00:09:41.814 "rw_ios_per_sec": 0, 00:09:41.814 "rw_mbytes_per_sec": 0, 00:09:41.814 "r_mbytes_per_sec": 0, 00:09:41.814 "w_mbytes_per_sec": 0 00:09:41.814 }, 00:09:41.814 "claimed": true, 00:09:41.814 "claim_type": "exclusive_write", 00:09:41.814 "zoned": false, 00:09:41.814 "supported_io_types": { 00:09:41.814 "read": true, 00:09:41.814 "write": true, 00:09:41.814 "unmap": true, 00:09:41.814 "flush": true, 00:09:41.814 "reset": true, 00:09:41.814 "nvme_admin": false, 00:09:41.814 "nvme_io": false, 00:09:41.814 "nvme_io_md": false, 00:09:41.814 "write_zeroes": true, 00:09:41.814 "zcopy": true, 00:09:41.814 "get_zone_info": false, 00:09:41.814 "zone_management": false, 00:09:41.814 "zone_append": false, 00:09:41.814 "compare": false, 00:09:41.814 "compare_and_write": false, 00:09:41.814 "abort": true, 00:09:41.814 "seek_hole": false, 00:09:41.814 "seek_data": false, 00:09:41.814 "copy": true, 00:09:41.814 "nvme_iov_md": false 00:09:41.814 }, 00:09:41.814 "memory_domains": [ 00:09:41.814 { 00:09:41.814 "dma_device_id": "system", 00:09:41.814 "dma_device_type": 1 00:09:41.814 }, 00:09:41.814 { 00:09:41.814 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.814 "dma_device_type": 2 00:09:41.814 } 00:09:41.814 ], 00:09:41.814 "driver_specific": {} 00:09:41.814 } 00:09:41.814 ] 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.814 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.381 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.381 "name": "Existed_Raid", 00:09:42.381 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:42.381 "strip_size_kb": 64, 00:09:42.381 "state": "online", 00:09:42.381 "raid_level": "raid0", 00:09:42.381 "superblock": true, 00:09:42.381 "num_base_bdevs": 3, 00:09:42.381 "num_base_bdevs_discovered": 3, 00:09:42.381 "num_base_bdevs_operational": 3, 00:09:42.381 "base_bdevs_list": [ 00:09:42.381 { 00:09:42.381 "name": "NewBaseBdev", 00:09:42.381 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:42.381 "is_configured": true, 00:09:42.381 "data_offset": 2048, 00:09:42.381 "data_size": 63488 00:09:42.381 }, 00:09:42.381 { 00:09:42.381 "name": "BaseBdev2", 00:09:42.381 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:42.381 "is_configured": true, 00:09:42.381 "data_offset": 2048, 00:09:42.381 "data_size": 63488 00:09:42.381 }, 00:09:42.381 { 00:09:42.381 "name": "BaseBdev3", 00:09:42.381 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:42.381 "is_configured": true, 00:09:42.381 "data_offset": 2048, 00:09:42.381 "data_size": 63488 00:09:42.381 } 00:09:42.381 ] 00:09:42.381 }' 00:09:42.381 17:28:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.381 17:28:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
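What follows is the verify_raid_bdev_properties pass for the now-online Existed_Raid: the test dumps the "Raid Volume" bdev, lists its configured base bdevs, and checks that block_size, md_size, md_interleave and dif_type agree on every member. A hedged sketch of that loop, reconstructed from the commands visible in the trace (variable names are illustrative, not the test script itself):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(echo "$raid_info" | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
  for name in $names; do                                 # NewBaseBdev BaseBdev2 BaseBdev3
    base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # each property of a base bdev must match the raid volume's own value
    [[ $(echo "$base_info" | jq .block_size)    == $(echo "$raid_info" | jq .block_size)    ]] || exit 1
    [[ $(echo "$base_info" | jq .md_size)       == $(echo "$raid_info" | jq .md_size)       ]] || exit 1
    [[ $(echo "$base_info" | jq .md_interleave) == $(echo "$raid_info" | jq .md_interleave) ]] || exit 1
    [[ $(echo "$base_info" | jq .dif_type)      == $(echo "$raid_info" | jq .dif_type)      ]] || exit 1
  done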
00:09:42.661 [2024-07-15 17:28:38.431038] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:42.661 "name": "Existed_Raid", 00:09:42.661 "aliases": [ 00:09:42.661 "a62d147f-42cf-11ef-96ac-773515fba644" 00:09:42.661 ], 00:09:42.661 "product_name": "Raid Volume", 00:09:42.661 "block_size": 512, 00:09:42.661 "num_blocks": 190464, 00:09:42.661 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:42.661 "assigned_rate_limits": { 00:09:42.661 "rw_ios_per_sec": 0, 00:09:42.661 "rw_mbytes_per_sec": 0, 00:09:42.661 "r_mbytes_per_sec": 0, 00:09:42.661 "w_mbytes_per_sec": 0 00:09:42.661 }, 00:09:42.661 "claimed": false, 00:09:42.661 "zoned": false, 00:09:42.661 "supported_io_types": { 00:09:42.661 "read": true, 00:09:42.661 "write": true, 00:09:42.661 "unmap": true, 00:09:42.661 "flush": true, 00:09:42.661 "reset": true, 00:09:42.661 "nvme_admin": false, 00:09:42.661 "nvme_io": false, 00:09:42.661 "nvme_io_md": false, 00:09:42.661 "write_zeroes": true, 00:09:42.661 "zcopy": false, 00:09:42.661 "get_zone_info": false, 00:09:42.661 "zone_management": false, 00:09:42.661 "zone_append": false, 00:09:42.661 "compare": false, 00:09:42.661 "compare_and_write": false, 00:09:42.661 "abort": false, 00:09:42.661 "seek_hole": false, 00:09:42.661 "seek_data": false, 00:09:42.661 "copy": false, 00:09:42.661 "nvme_iov_md": false 00:09:42.661 }, 00:09:42.661 "memory_domains": [ 00:09:42.661 { 00:09:42.661 "dma_device_id": "system", 00:09:42.661 "dma_device_type": 1 00:09:42.661 }, 00:09:42.661 { 00:09:42.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.661 "dma_device_type": 2 00:09:42.661 }, 00:09:42.661 { 00:09:42.661 "dma_device_id": "system", 00:09:42.661 "dma_device_type": 1 00:09:42.661 }, 00:09:42.661 { 00:09:42.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.661 "dma_device_type": 2 00:09:42.661 }, 00:09:42.661 { 00:09:42.661 "dma_device_id": "system", 00:09:42.661 "dma_device_type": 1 00:09:42.661 }, 00:09:42.661 { 00:09:42.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.661 "dma_device_type": 2 00:09:42.661 } 00:09:42.661 ], 00:09:42.661 "driver_specific": { 00:09:42.661 "raid": { 00:09:42.661 "uuid": "a62d147f-42cf-11ef-96ac-773515fba644", 00:09:42.661 "strip_size_kb": 64, 00:09:42.661 "state": "online", 00:09:42.661 "raid_level": "raid0", 00:09:42.661 "superblock": true, 00:09:42.661 "num_base_bdevs": 3, 00:09:42.661 "num_base_bdevs_discovered": 3, 00:09:42.661 "num_base_bdevs_operational": 3, 00:09:42.661 "base_bdevs_list": [ 00:09:42.661 { 00:09:42.661 "name": "NewBaseBdev", 00:09:42.661 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:42.661 "is_configured": true, 00:09:42.661 "data_offset": 2048, 00:09:42.661 "data_size": 63488 00:09:42.661 }, 00:09:42.661 { 00:09:42.661 "name": "BaseBdev2", 00:09:42.661 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:42.661 "is_configured": true, 00:09:42.661 "data_offset": 2048, 00:09:42.661 "data_size": 63488 00:09:42.661 }, 00:09:42.661 { 00:09:42.661 "name": "BaseBdev3", 00:09:42.661 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:42.661 "is_configured": true, 00:09:42.661 "data_offset": 2048, 00:09:42.661 "data_size": 63488 00:09:42.661 } 00:09:42.661 ] 00:09:42.661 } 00:09:42.661 } 00:09:42.661 }' 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:42.661 
17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:42.661 BaseBdev2 00:09:42.661 BaseBdev3' 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:42.661 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:42.920 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:42.920 "name": "NewBaseBdev", 00:09:42.920 "aliases": [ 00:09:42.920 "a759fd7a-42cf-11ef-96ac-773515fba644" 00:09:42.920 ], 00:09:42.920 "product_name": "Malloc disk", 00:09:42.920 "block_size": 512, 00:09:42.920 "num_blocks": 65536, 00:09:42.920 "uuid": "a759fd7a-42cf-11ef-96ac-773515fba644", 00:09:42.920 "assigned_rate_limits": { 00:09:42.920 "rw_ios_per_sec": 0, 00:09:42.920 "rw_mbytes_per_sec": 0, 00:09:42.920 "r_mbytes_per_sec": 0, 00:09:42.920 "w_mbytes_per_sec": 0 00:09:42.920 }, 00:09:42.920 "claimed": true, 00:09:42.920 "claim_type": "exclusive_write", 00:09:42.920 "zoned": false, 00:09:42.920 "supported_io_types": { 00:09:42.920 "read": true, 00:09:42.920 "write": true, 00:09:42.920 "unmap": true, 00:09:42.920 "flush": true, 00:09:42.920 "reset": true, 00:09:42.920 "nvme_admin": false, 00:09:42.920 "nvme_io": false, 00:09:42.920 "nvme_io_md": false, 00:09:42.920 "write_zeroes": true, 00:09:42.920 "zcopy": true, 00:09:42.920 "get_zone_info": false, 00:09:42.920 "zone_management": false, 00:09:42.920 "zone_append": false, 00:09:42.920 "compare": false, 00:09:42.920 "compare_and_write": false, 00:09:42.920 "abort": true, 00:09:42.920 "seek_hole": false, 00:09:42.920 "seek_data": false, 00:09:42.920 "copy": true, 00:09:42.920 "nvme_iov_md": false 00:09:42.920 }, 00:09:42.920 "memory_domains": [ 00:09:42.920 { 00:09:42.920 "dma_device_id": "system", 00:09:42.920 "dma_device_type": 1 00:09:42.920 }, 00:09:42.920 { 00:09:42.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.920 "dma_device_type": 2 00:09:42.920 } 00:09:42.920 ], 00:09:42.920 "driver_specific": {} 00:09:42.920 }' 00:09:42.920 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:43.179 17:28:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:43.437 "name": "BaseBdev2", 00:09:43.437 "aliases": [ 00:09:43.437 "a538f040-42cf-11ef-96ac-773515fba644" 00:09:43.437 ], 00:09:43.437 "product_name": "Malloc disk", 00:09:43.437 "block_size": 512, 00:09:43.437 "num_blocks": 65536, 00:09:43.437 "uuid": "a538f040-42cf-11ef-96ac-773515fba644", 00:09:43.437 "assigned_rate_limits": { 00:09:43.437 "rw_ios_per_sec": 0, 00:09:43.437 "rw_mbytes_per_sec": 0, 00:09:43.437 "r_mbytes_per_sec": 0, 00:09:43.437 "w_mbytes_per_sec": 0 00:09:43.437 }, 00:09:43.437 "claimed": true, 00:09:43.437 "claim_type": "exclusive_write", 00:09:43.437 "zoned": false, 00:09:43.437 "supported_io_types": { 00:09:43.437 "read": true, 00:09:43.437 "write": true, 00:09:43.437 "unmap": true, 00:09:43.437 "flush": true, 00:09:43.437 "reset": true, 00:09:43.437 "nvme_admin": false, 00:09:43.437 "nvme_io": false, 00:09:43.437 "nvme_io_md": false, 00:09:43.437 "write_zeroes": true, 00:09:43.437 "zcopy": true, 00:09:43.437 "get_zone_info": false, 00:09:43.437 "zone_management": false, 00:09:43.437 "zone_append": false, 00:09:43.437 "compare": false, 00:09:43.437 "compare_and_write": false, 00:09:43.437 "abort": true, 00:09:43.437 "seek_hole": false, 00:09:43.437 "seek_data": false, 00:09:43.437 "copy": true, 00:09:43.437 "nvme_iov_md": false 00:09:43.437 }, 00:09:43.437 "memory_domains": [ 00:09:43.437 { 00:09:43.437 "dma_device_id": "system", 00:09:43.437 "dma_device_type": 1 00:09:43.437 }, 00:09:43.437 { 00:09:43.437 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.437 "dma_device_type": 2 00:09:43.437 } 00:09:43.437 ], 00:09:43.437 "driver_specific": {} 00:09:43.437 }' 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:43.437 17:28:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:43.437 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:43.695 "name": "BaseBdev3", 00:09:43.695 "aliases": [ 00:09:43.695 "a5b1ca21-42cf-11ef-96ac-773515fba644" 00:09:43.695 ], 00:09:43.695 "product_name": "Malloc disk", 00:09:43.695 "block_size": 512, 00:09:43.695 "num_blocks": 65536, 00:09:43.695 "uuid": "a5b1ca21-42cf-11ef-96ac-773515fba644", 00:09:43.695 "assigned_rate_limits": { 00:09:43.695 "rw_ios_per_sec": 0, 00:09:43.695 "rw_mbytes_per_sec": 0, 00:09:43.695 "r_mbytes_per_sec": 0, 00:09:43.695 "w_mbytes_per_sec": 0 00:09:43.695 }, 00:09:43.695 "claimed": true, 00:09:43.695 "claim_type": "exclusive_write", 00:09:43.695 "zoned": false, 00:09:43.695 "supported_io_types": { 00:09:43.695 "read": true, 00:09:43.695 "write": true, 00:09:43.695 "unmap": true, 00:09:43.695 "flush": true, 00:09:43.695 "reset": true, 00:09:43.695 "nvme_admin": false, 00:09:43.695 "nvme_io": false, 00:09:43.695 "nvme_io_md": false, 00:09:43.695 "write_zeroes": true, 00:09:43.695 "zcopy": true, 00:09:43.695 "get_zone_info": false, 00:09:43.695 "zone_management": false, 00:09:43.695 "zone_append": false, 00:09:43.695 "compare": false, 00:09:43.695 "compare_and_write": false, 00:09:43.695 "abort": true, 00:09:43.695 "seek_hole": false, 00:09:43.695 "seek_data": false, 00:09:43.695 "copy": true, 00:09:43.695 "nvme_iov_md": false 00:09:43.695 }, 00:09:43.695 "memory_domains": [ 00:09:43.695 { 00:09:43.695 "dma_device_id": "system", 00:09:43.695 "dma_device_type": 1 00:09:43.695 }, 00:09:43.695 { 00:09:43.695 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.695 "dma_device_type": 2 00:09:43.695 } 00:09:43.695 ], 00:09:43.695 "driver_specific": {} 00:09:43.695 }' 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:43.695 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:43.953 [2024-07-15 17:28:39.687007] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:09:43.953 [2024-07-15 17:28:39.687032] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:43.953 [2024-07-15 17:28:39.687072] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:43.953 [2024-07-15 17:28:39.687086] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:43.954 [2024-07-15 17:28:39.687090] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18e569e34a00 name Existed_Raid, state offline 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 52677 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 52677 ']' 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 52677 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 52677 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:43.954 killing process with pid 52677 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52677' 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 52677 00:09:43.954 [2024-07-15 17:28:39.714323] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:43.954 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 52677 00:09:43.954 [2024-07-15 17:28:39.732437] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.213 17:28:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:44.213 00:09:44.213 real 0m24.185s 00:09:44.213 user 0m44.358s 00:09:44.213 sys 0m3.150s 00:09:44.213 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:44.213 17:28:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:44.213 ************************************ 00:09:44.213 END TEST raid_state_function_test_sb 00:09:44.213 ************************************ 00:09:44.213 17:28:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:44.213 17:28:39 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:44.213 17:28:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:44.213 17:28:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.213 17:28:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.213 ************************************ 00:09:44.213 START TEST raid_superblock_test 00:09:44.213 ************************************ 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=53405 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 53405 /var/tmp/spdk-raid.sock 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 53405 ']' 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:44.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:44.213 17:28:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.213 [2024-07-15 17:28:39.971960] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:09:44.213 [2024-07-15 17:28:39.972173] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:44.780 EAL: TSC is not safe to use in SMP mode 00:09:44.780 EAL: TSC is not invariant 00:09:44.780 [2024-07-15 17:28:40.523122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.039 [2024-07-15 17:28:40.612791] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
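Once bdev_svc is listening on /var/tmp/spdk-raid.sock, the superblock test builds its array the same way each run: three malloc bdevs, each wrapped in a passthru bdev with a fixed UUID, then a raid0 volume created with a superblock. A condensed sketch of the construction sequence that the following trace performs step by step (commands, sizes, names and UUIDs are copied from the trace; this is an illustration, not the test script itself):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for i in 1 2 3; do
    $rpc bdev_malloc_create 32 512 -b malloc$i
    $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
  done
  $rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s   # -s writes the superblock
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'   # expect "online"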
00:09:45.039 [2024-07-15 17:28:40.614920] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.039 [2024-07-15 17:28:40.615674] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.039 [2024-07-15 17:28:40.615689] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.298 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:09:45.557 malloc1 00:09:45.557 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:45.816 [2024-07-15 17:28:41.576742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:45.816 [2024-07-15 17:28:41.576806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.816 [2024-07-15 17:28:41.576819] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c3431c34780 00:09:45.816 [2024-07-15 17:28:41.576827] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.816 [2024-07-15 17:28:41.577731] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.816 [2024-07-15 17:28:41.577760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:45.816 pt1 00:09:45.816 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:45.816 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:45.816 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:09:45.816 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:09:45.816 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:45.816 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:45.816 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:45.816 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:45.816 17:28:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:09:46.074 malloc2 00:09:46.074 17:28:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:46.333 [2024-07-15 17:28:42.060742] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:46.333 [2024-07-15 17:28:42.060803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.333 [2024-07-15 17:28:42.060816] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c3431c34c80 00:09:46.333 [2024-07-15 17:28:42.060824] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.333 [2024-07-15 17:28:42.061470] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.333 [2024-07-15 17:28:42.061500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:46.333 pt2 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:46.333 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:09:46.593 malloc3 00:09:46.593 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:46.853 [2024-07-15 17:28:42.596751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:46.853 [2024-07-15 17:28:42.596811] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:46.853 [2024-07-15 17:28:42.596824] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c3431c35180 00:09:46.853 [2024-07-15 17:28:42.596832] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:46.853 [2024-07-15 17:28:42.597485] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:46.853 [2024-07-15 17:28:42.597515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:46.853 pt3 00:09:46.853 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:09:46.853 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:09:46.853 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:09:47.111 [2024-07-15 17:28:42.872770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:47.111 [2024-07-15 17:28:42.873350] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:47.111 [2024-07-15 17:28:42.873375] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:47.111 [2024-07-15 17:28:42.873429] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c3431c35400 00:09:47.111 [2024-07-15 17:28:42.873435] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:47.111 [2024-07-15 17:28:42.873469] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c3431c97e20 00:09:47.111 [2024-07-15 17:28:42.873546] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c3431c35400 00:09:47.111 [2024-07-15 17:28:42.873551] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3c3431c35400 00:09:47.111 [2024-07-15 17:28:42.873578] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.111 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:47.111 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:47.111 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.112 17:28:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:47.370 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:47.370 "name": "raid_bdev1", 00:09:47.370 "uuid": "aeb7ed2c-42cf-11ef-96ac-773515fba644", 00:09:47.370 "strip_size_kb": 64, 00:09:47.370 "state": "online", 00:09:47.370 "raid_level": "raid0", 00:09:47.370 "superblock": true, 00:09:47.370 "num_base_bdevs": 3, 00:09:47.370 "num_base_bdevs_discovered": 3, 00:09:47.370 "num_base_bdevs_operational": 3, 00:09:47.370 "base_bdevs_list": [ 00:09:47.370 { 00:09:47.370 "name": "pt1", 00:09:47.370 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.370 "is_configured": true, 00:09:47.370 "data_offset": 2048, 00:09:47.370 "data_size": 63488 00:09:47.370 }, 00:09:47.370 { 00:09:47.370 "name": "pt2", 00:09:47.370 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.370 "is_configured": true, 00:09:47.370 
"data_offset": 2048, 00:09:47.370 "data_size": 63488 00:09:47.370 }, 00:09:47.370 { 00:09:47.370 "name": "pt3", 00:09:47.370 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.370 "is_configured": true, 00:09:47.370 "data_offset": 2048, 00:09:47.370 "data_size": 63488 00:09:47.370 } 00:09:47.370 ] 00:09:47.370 }' 00:09:47.370 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:47.370 17:28:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.939 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:09:47.939 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:47.940 [2024-07-15 17:28:43.720807] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:47.940 "name": "raid_bdev1", 00:09:47.940 "aliases": [ 00:09:47.940 "aeb7ed2c-42cf-11ef-96ac-773515fba644" 00:09:47.940 ], 00:09:47.940 "product_name": "Raid Volume", 00:09:47.940 "block_size": 512, 00:09:47.940 "num_blocks": 190464, 00:09:47.940 "uuid": "aeb7ed2c-42cf-11ef-96ac-773515fba644", 00:09:47.940 "assigned_rate_limits": { 00:09:47.940 "rw_ios_per_sec": 0, 00:09:47.940 "rw_mbytes_per_sec": 0, 00:09:47.940 "r_mbytes_per_sec": 0, 00:09:47.940 "w_mbytes_per_sec": 0 00:09:47.940 }, 00:09:47.940 "claimed": false, 00:09:47.940 "zoned": false, 00:09:47.940 "supported_io_types": { 00:09:47.940 "read": true, 00:09:47.940 "write": true, 00:09:47.940 "unmap": true, 00:09:47.940 "flush": true, 00:09:47.940 "reset": true, 00:09:47.940 "nvme_admin": false, 00:09:47.940 "nvme_io": false, 00:09:47.940 "nvme_io_md": false, 00:09:47.940 "write_zeroes": true, 00:09:47.940 "zcopy": false, 00:09:47.940 "get_zone_info": false, 00:09:47.940 "zone_management": false, 00:09:47.940 "zone_append": false, 00:09:47.940 "compare": false, 00:09:47.940 "compare_and_write": false, 00:09:47.940 "abort": false, 00:09:47.940 "seek_hole": false, 00:09:47.940 "seek_data": false, 00:09:47.940 "copy": false, 00:09:47.940 "nvme_iov_md": false 00:09:47.940 }, 00:09:47.940 "memory_domains": [ 00:09:47.940 { 00:09:47.940 "dma_device_id": "system", 00:09:47.940 "dma_device_type": 1 00:09:47.940 }, 00:09:47.940 { 00:09:47.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.940 "dma_device_type": 2 00:09:47.940 }, 00:09:47.940 { 00:09:47.940 "dma_device_id": "system", 00:09:47.940 "dma_device_type": 1 00:09:47.940 }, 00:09:47.940 { 00:09:47.940 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.940 "dma_device_type": 2 00:09:47.940 }, 00:09:47.940 { 00:09:47.940 "dma_device_id": "system", 00:09:47.940 "dma_device_type": 1 00:09:47.940 }, 00:09:47.940 { 00:09:47.940 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:47.940 "dma_device_type": 2 00:09:47.940 } 00:09:47.940 ], 00:09:47.940 "driver_specific": { 00:09:47.940 "raid": { 00:09:47.940 "uuid": "aeb7ed2c-42cf-11ef-96ac-773515fba644", 00:09:47.940 "strip_size_kb": 64, 00:09:47.940 "state": "online", 00:09:47.940 "raid_level": "raid0", 00:09:47.940 "superblock": true, 00:09:47.940 "num_base_bdevs": 3, 00:09:47.940 "num_base_bdevs_discovered": 3, 00:09:47.940 "num_base_bdevs_operational": 3, 00:09:47.940 "base_bdevs_list": [ 00:09:47.940 { 00:09:47.940 "name": "pt1", 00:09:47.940 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:47.940 "is_configured": true, 00:09:47.940 "data_offset": 2048, 00:09:47.940 "data_size": 63488 00:09:47.940 }, 00:09:47.940 { 00:09:47.940 "name": "pt2", 00:09:47.940 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:47.940 "is_configured": true, 00:09:47.940 "data_offset": 2048, 00:09:47.940 "data_size": 63488 00:09:47.940 }, 00:09:47.940 { 00:09:47.940 "name": "pt3", 00:09:47.940 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:47.940 "is_configured": true, 00:09:47.940 "data_offset": 2048, 00:09:47.940 "data_size": 63488 00:09:47.940 } 00:09:47.940 ] 00:09:47.940 } 00:09:47.940 } 00:09:47.940 }' 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:47.940 pt2 00:09:47.940 pt3' 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:47.940 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:48.198 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:48.198 "name": "pt1", 00:09:48.198 "aliases": [ 00:09:48.198 "00000000-0000-0000-0000-000000000001" 00:09:48.198 ], 00:09:48.198 "product_name": "passthru", 00:09:48.198 "block_size": 512, 00:09:48.198 "num_blocks": 65536, 00:09:48.198 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:48.198 "assigned_rate_limits": { 00:09:48.198 "rw_ios_per_sec": 0, 00:09:48.198 "rw_mbytes_per_sec": 0, 00:09:48.198 "r_mbytes_per_sec": 0, 00:09:48.198 "w_mbytes_per_sec": 0 00:09:48.198 }, 00:09:48.198 "claimed": true, 00:09:48.198 "claim_type": "exclusive_write", 00:09:48.198 "zoned": false, 00:09:48.198 "supported_io_types": { 00:09:48.198 "read": true, 00:09:48.198 "write": true, 00:09:48.198 "unmap": true, 00:09:48.198 "flush": true, 00:09:48.198 "reset": true, 00:09:48.198 "nvme_admin": false, 00:09:48.198 "nvme_io": false, 00:09:48.198 "nvme_io_md": false, 00:09:48.198 "write_zeroes": true, 00:09:48.198 "zcopy": true, 00:09:48.198 "get_zone_info": false, 00:09:48.198 "zone_management": false, 00:09:48.198 "zone_append": false, 00:09:48.198 "compare": false, 00:09:48.198 "compare_and_write": false, 00:09:48.198 "abort": true, 00:09:48.198 "seek_hole": false, 00:09:48.198 "seek_data": false, 00:09:48.198 "copy": true, 00:09:48.198 "nvme_iov_md": false 00:09:48.198 }, 00:09:48.198 "memory_domains": [ 00:09:48.198 { 00:09:48.198 "dma_device_id": "system", 00:09:48.198 "dma_device_type": 1 00:09:48.198 }, 00:09:48.198 { 00:09:48.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.198 "dma_device_type": 2 
00:09:48.198 } 00:09:48.198 ], 00:09:48.198 "driver_specific": { 00:09:48.198 "passthru": { 00:09:48.198 "name": "pt1", 00:09:48.198 "base_bdev_name": "malloc1" 00:09:48.198 } 00:09:48.198 } 00:09:48.198 }' 00:09:48.198 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:48.198 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:48.198 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:48.198 17:28:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.198 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.198 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:48.198 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.198 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.198 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:48.198 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.458 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.458 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:48.458 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:48.458 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:48.458 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:48.458 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:48.458 "name": "pt2", 00:09:48.458 "aliases": [ 00:09:48.458 "00000000-0000-0000-0000-000000000002" 00:09:48.458 ], 00:09:48.458 "product_name": "passthru", 00:09:48.458 "block_size": 512, 00:09:48.458 "num_blocks": 65536, 00:09:48.458 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:48.458 "assigned_rate_limits": { 00:09:48.458 "rw_ios_per_sec": 0, 00:09:48.458 "rw_mbytes_per_sec": 0, 00:09:48.458 "r_mbytes_per_sec": 0, 00:09:48.458 "w_mbytes_per_sec": 0 00:09:48.458 }, 00:09:48.458 "claimed": true, 00:09:48.458 "claim_type": "exclusive_write", 00:09:48.458 "zoned": false, 00:09:48.459 "supported_io_types": { 00:09:48.459 "read": true, 00:09:48.459 "write": true, 00:09:48.459 "unmap": true, 00:09:48.459 "flush": true, 00:09:48.459 "reset": true, 00:09:48.459 "nvme_admin": false, 00:09:48.459 "nvme_io": false, 00:09:48.459 "nvme_io_md": false, 00:09:48.459 "write_zeroes": true, 00:09:48.459 "zcopy": true, 00:09:48.459 "get_zone_info": false, 00:09:48.459 "zone_management": false, 00:09:48.459 "zone_append": false, 00:09:48.459 "compare": false, 00:09:48.459 "compare_and_write": false, 00:09:48.459 "abort": true, 00:09:48.459 "seek_hole": false, 00:09:48.459 "seek_data": false, 00:09:48.459 "copy": true, 00:09:48.459 "nvme_iov_md": false 00:09:48.459 }, 00:09:48.459 "memory_domains": [ 00:09:48.459 { 00:09:48.459 "dma_device_id": "system", 00:09:48.459 "dma_device_type": 1 00:09:48.459 }, 00:09:48.459 { 00:09:48.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.459 "dma_device_type": 2 00:09:48.459 } 00:09:48.459 ], 00:09:48.459 "driver_specific": { 00:09:48.459 "passthru": { 00:09:48.459 "name": "pt2", 00:09:48.459 "base_bdev_name": 
"malloc2" 00:09:48.459 } 00:09:48.459 } 00:09:48.459 }' 00:09:48.459 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:48.717 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:48.976 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:48.976 "name": "pt3", 00:09:48.976 "aliases": [ 00:09:48.976 "00000000-0000-0000-0000-000000000003" 00:09:48.976 ], 00:09:48.976 "product_name": "passthru", 00:09:48.976 "block_size": 512, 00:09:48.976 "num_blocks": 65536, 00:09:48.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:48.976 "assigned_rate_limits": { 00:09:48.976 "rw_ios_per_sec": 0, 00:09:48.976 "rw_mbytes_per_sec": 0, 00:09:48.976 "r_mbytes_per_sec": 0, 00:09:48.976 "w_mbytes_per_sec": 0 00:09:48.976 }, 00:09:48.976 "claimed": true, 00:09:48.976 "claim_type": "exclusive_write", 00:09:48.976 "zoned": false, 00:09:48.976 "supported_io_types": { 00:09:48.976 "read": true, 00:09:48.976 "write": true, 00:09:48.976 "unmap": true, 00:09:48.976 "flush": true, 00:09:48.976 "reset": true, 00:09:48.976 "nvme_admin": false, 00:09:48.976 "nvme_io": false, 00:09:48.976 "nvme_io_md": false, 00:09:48.976 "write_zeroes": true, 00:09:48.976 "zcopy": true, 00:09:48.976 "get_zone_info": false, 00:09:48.976 "zone_management": false, 00:09:48.976 "zone_append": false, 00:09:48.976 "compare": false, 00:09:48.976 "compare_and_write": false, 00:09:48.976 "abort": true, 00:09:48.976 "seek_hole": false, 00:09:48.976 "seek_data": false, 00:09:48.976 "copy": true, 00:09:48.976 "nvme_iov_md": false 00:09:48.976 }, 00:09:48.976 "memory_domains": [ 00:09:48.976 { 00:09:48.976 "dma_device_id": "system", 00:09:48.976 "dma_device_type": 1 00:09:48.976 }, 00:09:48.976 { 00:09:48.976 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.976 "dma_device_type": 2 00:09:48.976 } 00:09:48.976 ], 00:09:48.976 "driver_specific": { 00:09:48.976 "passthru": { 00:09:48.976 "name": "pt3", 00:09:48.977 "base_bdev_name": "malloc3" 00:09:48.977 } 00:09:48.977 } 00:09:48.977 }' 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:48.977 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:09:49.236 [2024-07-15 17:28:44.880836] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:49.236 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=aeb7ed2c-42cf-11ef-96ac-773515fba644 00:09:49.236 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z aeb7ed2c-42cf-11ef-96ac-773515fba644 ']' 00:09:49.236 17:28:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:49.494 [2024-07-15 17:28:45.184783] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:49.494 [2024-07-15 17:28:45.184806] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:49.494 [2024-07-15 17:28:45.184830] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:49.494 [2024-07-15 17:28:45.184844] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:49.494 [2024-07-15 17:28:45.184849] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c3431c35400 name raid_bdev1, state offline 00:09:49.494 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.494 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:09:49.753 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:09:49.753 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:09:49.753 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:49.753 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:09:50.012 17:28:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:50.012 17:28:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:50.271 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:09:50.271 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:09:50.530 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:09:50.530 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:50.789 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:09:51.047 [2024-07-15 17:28:46.740846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:51.047 [2024-07-15 17:28:46.741424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:51.047 [2024-07-15 17:28:46.741444] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:51.048 [2024-07-15 17:28:46.741459] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:51.048 [2024-07-15 17:28:46.741495] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:51.048 [2024-07-15 17:28:46.741507] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc3 00:09:51.048 [2024-07-15 17:28:46.741515] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:51.048 [2024-07-15 17:28:46.741520] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c3431c35180 name raid_bdev1, state configuring 00:09:51.048 request: 00:09:51.048 { 00:09:51.048 "name": "raid_bdev1", 00:09:51.048 "raid_level": "raid0", 00:09:51.048 "base_bdevs": [ 00:09:51.048 "malloc1", 00:09:51.048 "malloc2", 00:09:51.048 "malloc3" 00:09:51.048 ], 00:09:51.048 "strip_size_kb": 64, 00:09:51.048 "superblock": false, 00:09:51.048 "method": "bdev_raid_create", 00:09:51.048 "req_id": 1 00:09:51.048 } 00:09:51.048 Got JSON-RPC error response 00:09:51.048 response: 00:09:51.048 { 00:09:51.048 "code": -17, 00:09:51.048 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:51.048 } 00:09:51.048 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:09:51.048 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:51.048 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:51.048 17:28:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:51.048 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.048 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:09:51.306 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:09:51.307 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:09:51.307 17:28:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:51.580 [2024-07-15 17:28:47.244845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:51.580 [2024-07-15 17:28:47.244900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:51.580 [2024-07-15 17:28:47.244913] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c3431c34c80 00:09:51.580 [2024-07-15 17:28:47.244921] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:51.580 [2024-07-15 17:28:47.245575] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:51.580 [2024-07-15 17:28:47.245603] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:51.580 [2024-07-15 17:28:47.245628] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:51.580 [2024-07-15 17:28:47.245640] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:51.580 pt1 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.580 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.873 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:51.873 "name": "raid_bdev1", 00:09:51.873 "uuid": "aeb7ed2c-42cf-11ef-96ac-773515fba644", 00:09:51.873 "strip_size_kb": 64, 00:09:51.873 "state": "configuring", 00:09:51.873 "raid_level": "raid0", 00:09:51.873 "superblock": true, 00:09:51.873 "num_base_bdevs": 3, 00:09:51.873 "num_base_bdevs_discovered": 1, 00:09:51.873 "num_base_bdevs_operational": 3, 00:09:51.873 "base_bdevs_list": [ 00:09:51.873 { 00:09:51.873 "name": "pt1", 00:09:51.873 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:51.873 "is_configured": true, 00:09:51.873 "data_offset": 2048, 00:09:51.873 "data_size": 63488 00:09:51.873 }, 00:09:51.873 { 00:09:51.873 "name": null, 00:09:51.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:51.873 "is_configured": false, 00:09:51.873 "data_offset": 2048, 00:09:51.873 "data_size": 63488 00:09:51.873 }, 00:09:51.873 { 00:09:51.873 "name": null, 00:09:51.873 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:51.873 "is_configured": false, 00:09:51.873 "data_offset": 2048, 00:09:51.873 "data_size": 63488 00:09:51.873 } 00:09:51.873 ] 00:09:51.873 }' 00:09:51.873 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:51.873 17:28:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.132 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:09:52.132 17:28:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:52.390 [2024-07-15 17:28:48.128859] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:52.390 [2024-07-15 17:28:48.128913] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:52.390 [2024-07-15 17:28:48.128926] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c3431c35680 00:09:52.390 [2024-07-15 17:28:48.128933] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:52.390 [2024-07-15 17:28:48.129048] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:52.390 [2024-07-15 17:28:48.129059] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:52.390 [2024-07-15 17:28:48.129082] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:52.390 [2024-07-15 17:28:48.129090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:52.390 
pt2 00:09:52.390 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:09:52.649 [2024-07-15 17:28:48.412869] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.649 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:52.907 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:52.907 "name": "raid_bdev1", 00:09:52.907 "uuid": "aeb7ed2c-42cf-11ef-96ac-773515fba644", 00:09:52.907 "strip_size_kb": 64, 00:09:52.907 "state": "configuring", 00:09:52.907 "raid_level": "raid0", 00:09:52.907 "superblock": true, 00:09:52.907 "num_base_bdevs": 3, 00:09:52.907 "num_base_bdevs_discovered": 1, 00:09:52.907 "num_base_bdevs_operational": 3, 00:09:52.907 "base_bdevs_list": [ 00:09:52.907 { 00:09:52.907 "name": "pt1", 00:09:52.907 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:52.907 "is_configured": true, 00:09:52.907 "data_offset": 2048, 00:09:52.907 "data_size": 63488 00:09:52.907 }, 00:09:52.907 { 00:09:52.907 "name": null, 00:09:52.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:52.907 "is_configured": false, 00:09:52.907 "data_offset": 2048, 00:09:52.907 "data_size": 63488 00:09:52.907 }, 00:09:52.907 { 00:09:52.907 "name": null, 00:09:52.907 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:52.907 "is_configured": false, 00:09:52.907 "data_offset": 2048, 00:09:52.907 "data_size": 63488 00:09:52.907 } 00:09:52.907 ] 00:09:52.907 }' 00:09:52.907 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:52.907 17:28:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.164 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:09:53.164 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:53.164 17:28:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:53.422 [2024-07-15 
17:28:49.184887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:53.422 [2024-07-15 17:28:49.184944] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.422 [2024-07-15 17:28:49.184956] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c3431c35680 00:09:53.422 [2024-07-15 17:28:49.184975] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.422 [2024-07-15 17:28:49.185089] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.422 [2024-07-15 17:28:49.185100] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:53.422 [2024-07-15 17:28:49.185123] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:53.422 [2024-07-15 17:28:49.185132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:53.422 pt2 00:09:53.422 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:53.422 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:53.422 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:53.680 [2024-07-15 17:28:49.448884] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:53.680 [2024-07-15 17:28:49.448940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:53.680 [2024-07-15 17:28:49.448952] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3c3431c35400 00:09:53.680 [2024-07-15 17:28:49.448960] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:53.680 [2024-07-15 17:28:49.449076] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:53.680 [2024-07-15 17:28:49.449086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:53.680 [2024-07-15 17:28:49.449108] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:53.680 [2024-07-15 17:28:49.449117] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:53.680 [2024-07-15 17:28:49.449145] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3c3431c34780 00:09:53.680 [2024-07-15 17:28:49.449149] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:53.680 [2024-07-15 17:28:49.449171] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3c3431c97e20 00:09:53.680 [2024-07-15 17:28:49.449232] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3c3431c34780 00:09:53.681 [2024-07-15 17:28:49.449237] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3c3431c34780 00:09:53.681 [2024-07-15 17:28:49.449258] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:53.681 pt3 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.681 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:53.939 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.939 "name": "raid_bdev1", 00:09:53.939 "uuid": "aeb7ed2c-42cf-11ef-96ac-773515fba644", 00:09:53.939 "strip_size_kb": 64, 00:09:53.939 "state": "online", 00:09:53.939 "raid_level": "raid0", 00:09:53.939 "superblock": true, 00:09:53.939 "num_base_bdevs": 3, 00:09:53.939 "num_base_bdevs_discovered": 3, 00:09:53.939 "num_base_bdevs_operational": 3, 00:09:53.939 "base_bdevs_list": [ 00:09:53.939 { 00:09:53.939 "name": "pt1", 00:09:53.939 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:53.939 "is_configured": true, 00:09:53.939 "data_offset": 2048, 00:09:53.939 "data_size": 63488 00:09:53.939 }, 00:09:53.939 { 00:09:53.939 "name": "pt2", 00:09:53.939 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:53.939 "is_configured": true, 00:09:53.939 "data_offset": 2048, 00:09:53.939 "data_size": 63488 00:09:53.939 }, 00:09:53.939 { 00:09:53.939 "name": "pt3", 00:09:53.939 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:53.939 "is_configured": true, 00:09:53.939 "data_offset": 2048, 00:09:53.939 "data_size": 63488 00:09:53.939 } 00:09:53.939 ] 00:09:53.939 }' 00:09:53.939 17:28:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.939 17:28:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.198 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:09:54.198 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:09:54.198 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:54.198 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:54.198 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:54.198 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:54.198 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:54.198 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:54.457 [2024-07-15 
17:28:50.220934] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:54.457 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:54.457 "name": "raid_bdev1", 00:09:54.457 "aliases": [ 00:09:54.457 "aeb7ed2c-42cf-11ef-96ac-773515fba644" 00:09:54.457 ], 00:09:54.457 "product_name": "Raid Volume", 00:09:54.457 "block_size": 512, 00:09:54.457 "num_blocks": 190464, 00:09:54.457 "uuid": "aeb7ed2c-42cf-11ef-96ac-773515fba644", 00:09:54.457 "assigned_rate_limits": { 00:09:54.457 "rw_ios_per_sec": 0, 00:09:54.457 "rw_mbytes_per_sec": 0, 00:09:54.457 "r_mbytes_per_sec": 0, 00:09:54.457 "w_mbytes_per_sec": 0 00:09:54.457 }, 00:09:54.457 "claimed": false, 00:09:54.457 "zoned": false, 00:09:54.457 "supported_io_types": { 00:09:54.457 "read": true, 00:09:54.457 "write": true, 00:09:54.457 "unmap": true, 00:09:54.457 "flush": true, 00:09:54.457 "reset": true, 00:09:54.457 "nvme_admin": false, 00:09:54.457 "nvme_io": false, 00:09:54.457 "nvme_io_md": false, 00:09:54.457 "write_zeroes": true, 00:09:54.457 "zcopy": false, 00:09:54.457 "get_zone_info": false, 00:09:54.457 "zone_management": false, 00:09:54.457 "zone_append": false, 00:09:54.457 "compare": false, 00:09:54.457 "compare_and_write": false, 00:09:54.457 "abort": false, 00:09:54.457 "seek_hole": false, 00:09:54.457 "seek_data": false, 00:09:54.457 "copy": false, 00:09:54.457 "nvme_iov_md": false 00:09:54.457 }, 00:09:54.457 "memory_domains": [ 00:09:54.457 { 00:09:54.457 "dma_device_id": "system", 00:09:54.457 "dma_device_type": 1 00:09:54.457 }, 00:09:54.457 { 00:09:54.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.457 "dma_device_type": 2 00:09:54.457 }, 00:09:54.457 { 00:09:54.457 "dma_device_id": "system", 00:09:54.457 "dma_device_type": 1 00:09:54.457 }, 00:09:54.457 { 00:09:54.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.457 "dma_device_type": 2 00:09:54.457 }, 00:09:54.457 { 00:09:54.457 "dma_device_id": "system", 00:09:54.457 "dma_device_type": 1 00:09:54.457 }, 00:09:54.457 { 00:09:54.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.457 "dma_device_type": 2 00:09:54.457 } 00:09:54.457 ], 00:09:54.457 "driver_specific": { 00:09:54.457 "raid": { 00:09:54.457 "uuid": "aeb7ed2c-42cf-11ef-96ac-773515fba644", 00:09:54.457 "strip_size_kb": 64, 00:09:54.457 "state": "online", 00:09:54.457 "raid_level": "raid0", 00:09:54.457 "superblock": true, 00:09:54.457 "num_base_bdevs": 3, 00:09:54.457 "num_base_bdevs_discovered": 3, 00:09:54.457 "num_base_bdevs_operational": 3, 00:09:54.457 "base_bdevs_list": [ 00:09:54.457 { 00:09:54.457 "name": "pt1", 00:09:54.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.457 "is_configured": true, 00:09:54.457 "data_offset": 2048, 00:09:54.457 "data_size": 63488 00:09:54.457 }, 00:09:54.457 { 00:09:54.457 "name": "pt2", 00:09:54.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.457 "is_configured": true, 00:09:54.457 "data_offset": 2048, 00:09:54.457 "data_size": 63488 00:09:54.457 }, 00:09:54.457 { 00:09:54.457 "name": "pt3", 00:09:54.457 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:54.457 "is_configured": true, 00:09:54.457 "data_offset": 2048, 00:09:54.457 "data_size": 63488 00:09:54.457 } 00:09:54.457 ] 00:09:54.457 } 00:09:54.457 } 00:09:54.457 }' 00:09:54.457 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:54.457 17:28:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:09:54.457 pt2 00:09:54.457 pt3' 00:09:54.457 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:54.457 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:09:54.457 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:54.716 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:54.716 "name": "pt1", 00:09:54.716 "aliases": [ 00:09:54.716 "00000000-0000-0000-0000-000000000001" 00:09:54.716 ], 00:09:54.716 "product_name": "passthru", 00:09:54.716 "block_size": 512, 00:09:54.716 "num_blocks": 65536, 00:09:54.716 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:54.716 "assigned_rate_limits": { 00:09:54.716 "rw_ios_per_sec": 0, 00:09:54.716 "rw_mbytes_per_sec": 0, 00:09:54.716 "r_mbytes_per_sec": 0, 00:09:54.716 "w_mbytes_per_sec": 0 00:09:54.717 }, 00:09:54.717 "claimed": true, 00:09:54.717 "claim_type": "exclusive_write", 00:09:54.717 "zoned": false, 00:09:54.717 "supported_io_types": { 00:09:54.717 "read": true, 00:09:54.717 "write": true, 00:09:54.717 "unmap": true, 00:09:54.717 "flush": true, 00:09:54.717 "reset": true, 00:09:54.717 "nvme_admin": false, 00:09:54.717 "nvme_io": false, 00:09:54.717 "nvme_io_md": false, 00:09:54.717 "write_zeroes": true, 00:09:54.717 "zcopy": true, 00:09:54.717 "get_zone_info": false, 00:09:54.717 "zone_management": false, 00:09:54.717 "zone_append": false, 00:09:54.717 "compare": false, 00:09:54.717 "compare_and_write": false, 00:09:54.717 "abort": true, 00:09:54.717 "seek_hole": false, 00:09:54.717 "seek_data": false, 00:09:54.717 "copy": true, 00:09:54.717 "nvme_iov_md": false 00:09:54.717 }, 00:09:54.717 "memory_domains": [ 00:09:54.717 { 00:09:54.717 "dma_device_id": "system", 00:09:54.717 "dma_device_type": 1 00:09:54.717 }, 00:09:54.717 { 00:09:54.717 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.717 "dma_device_type": 2 00:09:54.717 } 00:09:54.717 ], 00:09:54.717 "driver_specific": { 00:09:54.717 "passthru": { 00:09:54.717 "name": "pt1", 00:09:54.717 "base_bdev_name": "malloc1" 00:09:54.717 } 00:09:54.717 } 00:09:54.717 }' 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:09:54.717 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:54.974 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:54.974 "name": "pt2", 00:09:54.974 "aliases": [ 00:09:54.974 "00000000-0000-0000-0000-000000000002" 00:09:54.974 ], 00:09:54.974 "product_name": "passthru", 00:09:54.974 "block_size": 512, 00:09:54.975 "num_blocks": 65536, 00:09:54.975 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:54.975 "assigned_rate_limits": { 00:09:54.975 "rw_ios_per_sec": 0, 00:09:54.975 "rw_mbytes_per_sec": 0, 00:09:54.975 "r_mbytes_per_sec": 0, 00:09:54.975 "w_mbytes_per_sec": 0 00:09:54.975 }, 00:09:54.975 "claimed": true, 00:09:54.975 "claim_type": "exclusive_write", 00:09:54.975 "zoned": false, 00:09:54.975 "supported_io_types": { 00:09:54.975 "read": true, 00:09:54.975 "write": true, 00:09:54.975 "unmap": true, 00:09:54.975 "flush": true, 00:09:54.975 "reset": true, 00:09:54.975 "nvme_admin": false, 00:09:54.975 "nvme_io": false, 00:09:54.975 "nvme_io_md": false, 00:09:54.975 "write_zeroes": true, 00:09:54.975 "zcopy": true, 00:09:54.975 "get_zone_info": false, 00:09:54.975 "zone_management": false, 00:09:54.975 "zone_append": false, 00:09:54.975 "compare": false, 00:09:54.975 "compare_and_write": false, 00:09:54.975 "abort": true, 00:09:54.975 "seek_hole": false, 00:09:54.975 "seek_data": false, 00:09:54.975 "copy": true, 00:09:54.975 "nvme_iov_md": false 00:09:54.975 }, 00:09:54.975 "memory_domains": [ 00:09:54.975 { 00:09:54.975 "dma_device_id": "system", 00:09:54.975 "dma_device_type": 1 00:09:54.975 }, 00:09:54.975 { 00:09:54.975 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.975 "dma_device_type": 2 00:09:54.975 } 00:09:54.975 ], 00:09:54.975 "driver_specific": { 00:09:54.975 "passthru": { 00:09:54.975 "name": "pt2", 00:09:54.975 "base_bdev_name": "malloc2" 00:09:54.975 } 00:09:54.975 } 00:09:54.975 }' 00:09:54.975 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:54.975 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:54.975 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:54.975 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:54.975 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:54.975 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:55.232 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.232 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.232 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:55.232 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.233 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.233 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:55.233 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:55.233 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:09:55.233 17:28:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:55.489 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:55.489 "name": "pt3", 00:09:55.489 "aliases": [ 00:09:55.489 "00000000-0000-0000-0000-000000000003" 00:09:55.489 ], 00:09:55.489 "product_name": "passthru", 00:09:55.489 "block_size": 512, 00:09:55.489 "num_blocks": 65536, 00:09:55.489 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:55.489 "assigned_rate_limits": { 00:09:55.489 "rw_ios_per_sec": 0, 00:09:55.489 "rw_mbytes_per_sec": 0, 00:09:55.489 "r_mbytes_per_sec": 0, 00:09:55.489 "w_mbytes_per_sec": 0 00:09:55.489 }, 00:09:55.489 "claimed": true, 00:09:55.489 "claim_type": "exclusive_write", 00:09:55.489 "zoned": false, 00:09:55.489 "supported_io_types": { 00:09:55.489 "read": true, 00:09:55.489 "write": true, 00:09:55.489 "unmap": true, 00:09:55.489 "flush": true, 00:09:55.489 "reset": true, 00:09:55.489 "nvme_admin": false, 00:09:55.489 "nvme_io": false, 00:09:55.489 "nvme_io_md": false, 00:09:55.489 "write_zeroes": true, 00:09:55.489 "zcopy": true, 00:09:55.489 "get_zone_info": false, 00:09:55.489 "zone_management": false, 00:09:55.489 "zone_append": false, 00:09:55.489 "compare": false, 00:09:55.489 "compare_and_write": false, 00:09:55.489 "abort": true, 00:09:55.489 "seek_hole": false, 00:09:55.489 "seek_data": false, 00:09:55.489 "copy": true, 00:09:55.489 "nvme_iov_md": false 00:09:55.489 }, 00:09:55.489 "memory_domains": [ 00:09:55.489 { 00:09:55.489 "dma_device_id": "system", 00:09:55.489 "dma_device_type": 1 00:09:55.489 }, 00:09:55.489 { 00:09:55.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.489 "dma_device_type": 2 00:09:55.489 } 00:09:55.489 ], 00:09:55.489 "driver_specific": { 00:09:55.489 "passthru": { 00:09:55.489 "name": "pt3", 00:09:55.489 "base_bdev_name": "malloc3" 00:09:55.489 } 00:09:55.489 } 00:09:55.489 }' 00:09:55.489 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:55.489 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:55.489 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:55.489 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:55.489 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:55.490 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:09:55.747 [2024-07-15 17:28:51.440957] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' aeb7ed2c-42cf-11ef-96ac-773515fba644 '!=' aeb7ed2c-42cf-11ef-96ac-773515fba644 ']' 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 53405 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 53405 ']' 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 53405 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 53405 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:55.747 killing process with pid 53405 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53405' 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 53405 00:09:55.747 [2024-07-15 17:28:51.471040] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:55.747 [2024-07-15 17:28:51.471066] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:55.747 [2024-07-15 17:28:51.471080] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:55.747 [2024-07-15 17:28:51.471085] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3c3431c34780 name raid_bdev1, state offline 00:09:55.747 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 53405 00:09:55.747 [2024-07-15 17:28:51.488289] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:56.005 17:28:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:09:56.005 00:09:56.005 real 0m11.701s 00:09:56.005 user 0m20.738s 00:09:56.005 sys 0m1.877s 00:09:56.005 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:56.005 17:28:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 ************************************ 00:09:56.005 END TEST raid_superblock_test 00:09:56.005 ************************************ 00:09:56.005 17:28:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:56.005 17:28:51 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:56.005 17:28:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:56.005 17:28:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:56.005 17:28:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 ************************************ 
00:09:56.005 START TEST raid_read_error_test 00:09:56.005 ************************************ 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.SIYWqagipu 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53760 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53760 /var/tmp/spdk-raid.sock 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 53760 ']' 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:56.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:56.005 17:28:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.005 [2024-07-15 17:28:51.724167] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:09:56.005 [2024-07-15 17:28:51.724442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:56.572 EAL: TSC is not safe to use in SMP mode 00:09:56.572 EAL: TSC is not invariant 00:09:56.572 [2024-07-15 17:28:52.244602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.572 [2024-07-15 17:28:52.332772] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:56.572 [2024-07-15 17:28:52.334840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.572 [2024-07-15 17:28:52.335588] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:56.572 [2024-07-15 17:28:52.335603] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:57.137 17:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:57.137 17:28:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:57.137 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:57.137 17:28:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:57.394 BaseBdev1_malloc 00:09:57.394 17:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:57.651 true 00:09:57.651 17:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:57.908 [2024-07-15 17:28:53.563910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:57.908 [2024-07-15 17:28:53.563980] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:57.908 [2024-07-15 17:28:53.564008] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2619bd034780 00:09:57.908 [2024-07-15 17:28:53.564017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:57.908 [2024-07-15 17:28:53.564696] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:57.908 [2024-07-15 17:28:53.564722] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:57.908 BaseBdev1 00:09:57.908 17:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:57.908 17:28:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:58.166 BaseBdev2_malloc 00:09:58.166 17:28:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:58.423 true 00:09:58.423 17:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:58.681 [2024-07-15 17:28:54.275917] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:58.681 [2024-07-15 17:28:54.275975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:58.681 [2024-07-15 17:28:54.276003] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2619bd034c80 00:09:58.681 [2024-07-15 17:28:54.276012] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:58.681 [2024-07-15 17:28:54.276687] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:58.682 [2024-07-15 17:28:54.276714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:58.682 BaseBdev2 00:09:58.682 17:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:58.682 17:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:58.940 BaseBdev3_malloc 00:09:58.940 17:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:09:59.198 true 00:09:59.198 17:28:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:59.456 [2024-07-15 17:28:55.115927] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:59.456 [2024-07-15 17:28:55.115981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:59.456 [2024-07-15 17:28:55.116008] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2619bd035180 00:09:59.456 [2024-07-15 17:28:55.116017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:59.456 [2024-07-15 17:28:55.116711] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:59.456 [2024-07-15 17:28:55.116744] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:59.456 BaseBdev3 00:09:59.456 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:09:59.714 [2024-07-15 17:28:55.363950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.714 [2024-07-15 17:28:55.364552] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:59.714 [2024-07-15 17:28:55.364577] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:59.714 
[2024-07-15 17:28:55.364636] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2619bd035400 00:09:59.714 [2024-07-15 17:28:55.364642] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:59.714 [2024-07-15 17:28:55.364681] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2619bd0a0e20 00:09:59.714 [2024-07-15 17:28:55.364761] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2619bd035400 00:09:59.714 [2024-07-15 17:28:55.364766] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2619bd035400 00:09:59.714 [2024-07-15 17:28:55.364795] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:59.714 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:59.715 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.972 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:59.972 "name": "raid_bdev1", 00:09:59.972 "uuid": "b629ede4-42cf-11ef-96ac-773515fba644", 00:09:59.972 "strip_size_kb": 64, 00:09:59.972 "state": "online", 00:09:59.972 "raid_level": "raid0", 00:09:59.972 "superblock": true, 00:09:59.972 "num_base_bdevs": 3, 00:09:59.972 "num_base_bdevs_discovered": 3, 00:09:59.972 "num_base_bdevs_operational": 3, 00:09:59.972 "base_bdevs_list": [ 00:09:59.972 { 00:09:59.972 "name": "BaseBdev1", 00:09:59.972 "uuid": "67c6d7a6-3cf9-3056-8743-7d6dd5d1051e", 00:09:59.972 "is_configured": true, 00:09:59.972 "data_offset": 2048, 00:09:59.972 "data_size": 63488 00:09:59.972 }, 00:09:59.972 { 00:09:59.972 "name": "BaseBdev2", 00:09:59.972 "uuid": "dfd4b2ca-8b14-7051-b09f-f2b5da164255", 00:09:59.972 "is_configured": true, 00:09:59.972 "data_offset": 2048, 00:09:59.972 "data_size": 63488 00:09:59.972 }, 00:09:59.972 { 00:09:59.972 "name": "BaseBdev3", 00:09:59.972 "uuid": "658226ee-667a-dc5b-95eb-8857834c765a", 00:09:59.972 "is_configured": true, 00:09:59.972 "data_offset": 2048, 00:09:59.972 "data_size": 63488 00:09:59.972 } 00:09:59.972 ] 00:09:59.972 }' 00:09:59.972 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:59.972 17:28:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.230 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:00.230 17:28:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:00.230 [2024-07-15 17:28:56.056187] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2619bd0a0ec0 00:10:01.167 17:28:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:01.425 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:01.683 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:01.684 "name": "raid_bdev1", 00:10:01.684 "uuid": "b629ede4-42cf-11ef-96ac-773515fba644", 00:10:01.684 "strip_size_kb": 64, 00:10:01.684 "state": "online", 00:10:01.684 "raid_level": "raid0", 00:10:01.684 "superblock": true, 00:10:01.684 "num_base_bdevs": 3, 00:10:01.684 "num_base_bdevs_discovered": 3, 00:10:01.684 "num_base_bdevs_operational": 3, 00:10:01.684 "base_bdevs_list": [ 00:10:01.684 { 00:10:01.684 "name": "BaseBdev1", 00:10:01.684 "uuid": "67c6d7a6-3cf9-3056-8743-7d6dd5d1051e", 00:10:01.684 "is_configured": true, 00:10:01.684 "data_offset": 2048, 00:10:01.684 "data_size": 63488 00:10:01.684 }, 00:10:01.684 { 00:10:01.684 "name": "BaseBdev2", 00:10:01.684 "uuid": "dfd4b2ca-8b14-7051-b09f-f2b5da164255", 00:10:01.684 "is_configured": true, 00:10:01.684 "data_offset": 2048, 00:10:01.684 "data_size": 63488 00:10:01.684 }, 00:10:01.684 { 00:10:01.684 "name": "BaseBdev3", 00:10:01.684 "uuid": "658226ee-667a-dc5b-95eb-8857834c765a", 00:10:01.684 "is_configured": true, 00:10:01.684 "data_offset": 2048, 00:10:01.684 "data_size": 63488 
00:10:01.684 } 00:10:01.684 ] 00:10:01.684 }' 00:10:01.684 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:01.684 17:28:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.250 17:28:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:02.508 [2024-07-15 17:28:58.098668] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:02.508 [2024-07-15 17:28:58.098699] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.508 [2024-07-15 17:28:58.099048] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.508 [2024-07-15 17:28:58.099060] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.508 [2024-07-15 17:28:58.099067] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.508 [2024-07-15 17:28:58.099072] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2619bd035400 name raid_bdev1, state offline 00:10:02.508 0 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 53760 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 53760 ']' 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 53760 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53760 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:02.508 killing process with pid 53760 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53760' 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 53760 00:10:02.508 [2024-07-15 17:28:58.125230] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:02.508 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 53760 00:10:02.508 [2024-07-15 17:28:58.142473] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.SIYWqagipu 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:10:02.509 00:10:02.509 real 0m6.620s 00:10:02.509 user 0m10.502s 00:10:02.509 sys 0m1.033s 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:02.509 ************************************ 00:10:02.509 END TEST raid_read_error_test 00:10:02.509 ************************************ 00:10:02.509 17:28:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.767 17:28:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:02.767 17:28:58 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:02.767 17:28:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:02.767 17:28:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:02.767 17:28:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:02.767 ************************************ 00:10:02.767 START TEST raid_write_error_test 00:10:02.767 ************************************ 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:02.767 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:02.768 17:28:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.MwvJmpziMm 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53891 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53891 /var/tmp/spdk-raid.sock 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 53891 ']' 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:02.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:02.768 17:28:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.768 [2024-07-15 17:28:58.380801] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:10:02.768 [2024-07-15 17:28:58.380941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:03.335 EAL: TSC is not safe to use in SMP mode 00:10:03.335 EAL: TSC is not invariant 00:10:03.335 [2024-07-15 17:28:58.898513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.335 [2024-07-15 17:28:58.986905] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
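Each base device in these error tests is the same small stack assembled in the read test above: a malloc bdev, an error bdev wrapped around it (registered as EE_<name>), and a passthru bdev that the RAID module then claims. A minimal sketch of that stack for the first device, using the same rpc.py calls that appear in the trace; the rpc and sock shell variables are only shorthand for the paths shown in the log:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # backing malloc bdev: 32 MB, 512-byte blocks
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    # error-injection wrapper, exposed as EE_BaseBdev1_malloc
    $rpc -s $sock bdev_error_create BaseBdev1_malloc
    # passthru bdev that the raid test actually consumes
    $rpc -s $sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1

The same three calls are repeated for BaseBdev2 and BaseBdev3 before the array is assembled.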
00:10:03.335 [2024-07-15 17:28:58.989080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.335 [2024-07-15 17:28:58.989862] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.335 [2024-07-15 17:28:58.989874] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:03.594 17:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:03.594 17:28:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:03.594 17:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:03.594 17:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:03.852 BaseBdev1_malloc 00:10:03.852 17:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:04.111 true 00:10:04.111 17:28:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:04.369 [2024-07-15 17:29:00.114181] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:04.369 [2024-07-15 17:29:00.114238] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:04.369 [2024-07-15 17:29:00.114267] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x161eb8034780 00:10:04.369 [2024-07-15 17:29:00.114276] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:04.369 [2024-07-15 17:29:00.114959] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:04.369 [2024-07-15 17:29:00.114981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:04.369 BaseBdev1 00:10:04.369 17:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:04.369 17:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:04.627 BaseBdev2_malloc 00:10:04.627 17:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:04.885 true 00:10:04.885 17:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:05.148 [2024-07-15 17:29:00.862201] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:05.148 [2024-07-15 17:29:00.862257] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.148 [2024-07-15 17:29:00.862284] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x161eb8034c80 00:10:05.148 [2024-07-15 17:29:00.862293] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.148 [2024-07-15 17:29:00.862946] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.148 [2024-07-15 17:29:00.862973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:10:05.148 BaseBdev2 00:10:05.148 17:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:05.148 17:29:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:05.406 BaseBdev3_malloc 00:10:05.406 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:05.663 true 00:10:05.663 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:05.921 [2024-07-15 17:29:01.586216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:05.921 [2024-07-15 17:29:01.586275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.921 [2024-07-15 17:29:01.586302] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x161eb8035180 00:10:05.921 [2024-07-15 17:29:01.586311] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.921 [2024-07-15 17:29:01.586959] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.921 [2024-07-15 17:29:01.586985] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:05.921 BaseBdev3 00:10:05.921 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:06.179 [2024-07-15 17:29:01.874225] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.179 [2024-07-15 17:29:01.874831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:06.179 [2024-07-15 17:29:01.874856] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:06.179 [2024-07-15 17:29:01.874914] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x161eb8035400 00:10:06.179 [2024-07-15 17:29:01.874920] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:06.179 [2024-07-15 17:29:01.874957] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x161eb80a0e20 00:10:06.179 [2024-07-15 17:29:01.875029] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x161eb8035400 00:10:06.179 [2024-07-15 17:29:01.875033] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x161eb8035400 00:10:06.179 [2024-07-15 17:29:01.875060] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
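With the three passthru devices in place, the array itself is created and then checked; the verification helper reads the whole RAID bdev list over RPC and picks out the entry of interest with jq. A sketch of the two calls as they appear in the trace (-z 64 sets the 64 KB strip size, -r raid0 the level, -s enables the superblock; rpc and sock shorthand as in the earlier sketch):

    $rpc -s $sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # expected afterwards: state "online", raid_level "raid0", strip_size_kb 64, 3 of 3 base bdevs discovered
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'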
00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.179 17:29:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.437 17:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.437 "name": "raid_bdev1", 00:10:06.437 "uuid": "ba0b5193-42cf-11ef-96ac-773515fba644", 00:10:06.437 "strip_size_kb": 64, 00:10:06.437 "state": "online", 00:10:06.437 "raid_level": "raid0", 00:10:06.437 "superblock": true, 00:10:06.437 "num_base_bdevs": 3, 00:10:06.437 "num_base_bdevs_discovered": 3, 00:10:06.437 "num_base_bdevs_operational": 3, 00:10:06.437 "base_bdevs_list": [ 00:10:06.437 { 00:10:06.437 "name": "BaseBdev1", 00:10:06.437 "uuid": "43ef121c-a032-b554-b5c7-ddcfa1f87d24", 00:10:06.437 "is_configured": true, 00:10:06.437 "data_offset": 2048, 00:10:06.437 "data_size": 63488 00:10:06.437 }, 00:10:06.437 { 00:10:06.437 "name": "BaseBdev2", 00:10:06.437 "uuid": "1eff54d5-9a98-0c53-853c-079d15f96336", 00:10:06.437 "is_configured": true, 00:10:06.437 "data_offset": 2048, 00:10:06.437 "data_size": 63488 00:10:06.437 }, 00:10:06.437 { 00:10:06.437 "name": "BaseBdev3", 00:10:06.437 "uuid": "46d57959-05ef-d152-a223-38077b7d211c", 00:10:06.437 "is_configured": true, 00:10:06.437 "data_offset": 2048, 00:10:06.437 "data_size": 63488 00:10:06.437 } 00:10:06.437 ] 00:10:06.437 }' 00:10:06.437 17:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.437 17:29:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.694 17:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:06.694 17:29:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:06.950 [2024-07-15 17:29:02.550412] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x161eb80a0ec0 00:10:07.882 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.139 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.140 17:29:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.398 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.398 "name": "raid_bdev1", 00:10:08.398 "uuid": "ba0b5193-42cf-11ef-96ac-773515fba644", 00:10:08.398 "strip_size_kb": 64, 00:10:08.398 "state": "online", 00:10:08.398 "raid_level": "raid0", 00:10:08.398 "superblock": true, 00:10:08.398 "num_base_bdevs": 3, 00:10:08.398 "num_base_bdevs_discovered": 3, 00:10:08.398 "num_base_bdevs_operational": 3, 00:10:08.398 "base_bdevs_list": [ 00:10:08.398 { 00:10:08.398 "name": "BaseBdev1", 00:10:08.398 "uuid": "43ef121c-a032-b554-b5c7-ddcfa1f87d24", 00:10:08.398 "is_configured": true, 00:10:08.398 "data_offset": 2048, 00:10:08.398 "data_size": 63488 00:10:08.398 }, 00:10:08.398 { 00:10:08.398 "name": "BaseBdev2", 00:10:08.398 "uuid": "1eff54d5-9a98-0c53-853c-079d15f96336", 00:10:08.398 "is_configured": true, 00:10:08.398 "data_offset": 2048, 00:10:08.398 "data_size": 63488 00:10:08.398 }, 00:10:08.398 { 00:10:08.398 "name": "BaseBdev3", 00:10:08.398 "uuid": "46d57959-05ef-d152-a223-38077b7d211c", 00:10:08.398 "is_configured": true, 00:10:08.398 "data_offset": 2048, 00:10:08.398 "data_size": 63488 00:10:08.398 } 00:10:08.398 ] 00:10:08.398 }' 00:10:08.398 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.398 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.655 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:08.914 [2024-07-15 17:29:04.592347] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:08.914 [2024-07-15 17:29:04.592376] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:08.914 [2024-07-15 17:29:04.592741] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.914 [2024-07-15 17:29:04.592751] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.914 [2024-07-15 17:29:04.592759] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.914 [2024-07-15 17:29:04.592764] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x161eb8035400 name raid_bdev1, state offline 00:10:08.914 0 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 53891 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 53891 ']' 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 53891 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53891 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:08.914 killing process with pid 53891 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53891' 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 53891 00:10:08.914 [2024-07-15 17:29:04.621224] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:08.914 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 53891 00:10:08.914 [2024-07-15 17:29:04.638521] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.MwvJmpziMm 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:10:09.172 00:10:09.172 real 0m6.455s 00:10:09.172 user 0m10.136s 00:10:09.172 sys 0m1.021s 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:09.172 ************************************ 00:10:09.172 END TEST raid_write_error_test 00:10:09.172 ************************************ 00:10:09.172 17:29:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.172 17:29:04 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:09.172 17:29:04 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:09.172 17:29:04 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:10:09.172 17:29:04 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:09.172 17:29:04 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:09.172 17:29:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:09.172 ************************************ 00:10:09.172 START TEST raid_state_function_test 00:10:09.172 ************************************ 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # 
raid_state_function_test concat 3 false 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=54020 00:10:09.172 Process raid pid: 54020 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54020' 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 54020 /var/tmp/spdk-raid.sock 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 54020 ']' 
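Both error tests end with the same measurement: a read or write failure is injected into the first base device, bdevperf pushes its workload through raid_bdev1, and the failure rate is pulled out of the bdevperf log and required to be non-zero. A sketch of the sequence the write test just ran (rpc and sock shorthand as before; the log path is the mktemp result shown above):

    # inject write failures on the first base device
    $rpc -s $sock bdev_error_inject_error EE_BaseBdev1_malloc write failure
    # kick off the queued bdevperf job
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests
    # extract the failure-rate column for raid_bdev1 (the script's fail_per_s) and insist it is non-zero
    fail_per_s=$(grep -v Job /raidtest/tmp.MwvJmpziMm | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s != "0.00" ]]

Both runs above measured 0.49 failures per second, so the read and the write variants both passed.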
00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:09.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:09.172 17:29:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.172 [2024-07-15 17:29:04.871763] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:10:09.172 [2024-07-15 17:29:04.871979] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:09.739 EAL: TSC is not safe to use in SMP mode 00:10:09.739 EAL: TSC is not invariant 00:10:09.739 [2024-07-15 17:29:05.409164] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:09.739 [2024-07-15 17:29:05.500574] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:09.739 [2024-07-15 17:29:05.502698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.739 [2024-07-15 17:29:05.503471] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:09.739 [2024-07-15 17:29:05.503487] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:10.304 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:10.304 17:29:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:10.304 17:29:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:10.562 [2024-07-15 17:29:06.244106] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.562 [2024-07-15 17:29:06.244163] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.562 [2024-07-15 17:29:06.244169] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:10.562 [2024-07-15 17:29:06.244177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:10.562 [2024-07-15 17:29:06.244181] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:10.562 [2024-07-15 17:29:06.244189] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:10.562 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.819 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:10.819 "name": "Existed_Raid", 00:10:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.819 "strip_size_kb": 64, 00:10:10.819 "state": "configuring", 00:10:10.819 "raid_level": "concat", 00:10:10.819 "superblock": false, 00:10:10.819 "num_base_bdevs": 3, 00:10:10.819 "num_base_bdevs_discovered": 0, 00:10:10.819 "num_base_bdevs_operational": 3, 00:10:10.819 "base_bdevs_list": [ 00:10:10.819 { 00:10:10.819 "name": "BaseBdev1", 00:10:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.819 "is_configured": false, 00:10:10.819 "data_offset": 0, 00:10:10.819 "data_size": 0 00:10:10.819 }, 00:10:10.819 { 00:10:10.819 "name": "BaseBdev2", 00:10:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.819 "is_configured": false, 00:10:10.819 "data_offset": 0, 00:10:10.819 "data_size": 0 00:10:10.819 }, 00:10:10.819 { 00:10:10.819 "name": "BaseBdev3", 00:10:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.819 "is_configured": false, 00:10:10.819 "data_offset": 0, 00:10:10.819 "data_size": 0 00:10:10.819 } 00:10:10.819 ] 00:10:10.819 }' 00:10:10.819 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:10.819 17:29:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.076 17:29:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:11.334 [2024-07-15 17:29:07.112116] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:11.334 [2024-07-15 17:29:07.112147] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x101770a34500 name Existed_Raid, state configuring 00:10:11.334 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:11.591 [2024-07-15 17:29:07.392122] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:11.591 [2024-07-15 17:29:07.392181] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:11.591 [2024-07-15 17:29:07.392187] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:11.591 [2024-07-15 17:29:07.392195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:11.591 [2024-07-15 
17:29:07.392199] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:11.591 [2024-07-15 17:29:07.392206] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:11.591 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:12.209 [2024-07-15 17:29:07.681249] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:12.209 BaseBdev1 00:10:12.209 17:29:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:12.209 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:12.209 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:12.209 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:12.209 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:12.209 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:12.209 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:12.209 17:29:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:12.466 [ 00:10:12.466 { 00:10:12.466 "name": "BaseBdev1", 00:10:12.466 "aliases": [ 00:10:12.466 "bd813b29-42cf-11ef-96ac-773515fba644" 00:10:12.466 ], 00:10:12.466 "product_name": "Malloc disk", 00:10:12.466 "block_size": 512, 00:10:12.466 "num_blocks": 65536, 00:10:12.466 "uuid": "bd813b29-42cf-11ef-96ac-773515fba644", 00:10:12.466 "assigned_rate_limits": { 00:10:12.466 "rw_ios_per_sec": 0, 00:10:12.466 "rw_mbytes_per_sec": 0, 00:10:12.466 "r_mbytes_per_sec": 0, 00:10:12.466 "w_mbytes_per_sec": 0 00:10:12.466 }, 00:10:12.466 "claimed": true, 00:10:12.466 "claim_type": "exclusive_write", 00:10:12.466 "zoned": false, 00:10:12.466 "supported_io_types": { 00:10:12.466 "read": true, 00:10:12.466 "write": true, 00:10:12.466 "unmap": true, 00:10:12.466 "flush": true, 00:10:12.466 "reset": true, 00:10:12.466 "nvme_admin": false, 00:10:12.466 "nvme_io": false, 00:10:12.466 "nvme_io_md": false, 00:10:12.466 "write_zeroes": true, 00:10:12.466 "zcopy": true, 00:10:12.466 "get_zone_info": false, 00:10:12.466 "zone_management": false, 00:10:12.466 "zone_append": false, 00:10:12.466 "compare": false, 00:10:12.466 "compare_and_write": false, 00:10:12.466 "abort": true, 00:10:12.466 "seek_hole": false, 00:10:12.466 "seek_data": false, 00:10:12.466 "copy": true, 00:10:12.466 "nvme_iov_md": false 00:10:12.466 }, 00:10:12.466 "memory_domains": [ 00:10:12.466 { 00:10:12.466 "dma_device_id": "system", 00:10:12.466 "dma_device_type": 1 00:10:12.466 }, 00:10:12.466 { 00:10:12.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:12.466 "dma_device_type": 2 00:10:12.466 } 00:10:12.466 ], 00:10:12.466 "driver_specific": {} 00:10:12.466 } 00:10:12.466 ] 00:10:12.466 17:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:12.466 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
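The state-function test registers the array before any of its members exist and then adds the base devices one at a time: each new malloc bdev is claimed as soon as it appears, num_base_bdevs_discovered ticks up, and Existed_Raid stays in the "configuring" state until all three members are present. A sketch of the step just traced, using the same RPCs (rpc and sock shorthand as before; names and sizes mirror the log):

    # register the array against three names that do not exist yet
    $rpc -s $sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    # create the first member; the RAID module claims it immediately
    $rpc -s $sock bdev_malloc_create 32 512 -b BaseBdev1
    # Existed_Raid is still "configuring" with num_base_bdevs_discovered == 1
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'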
00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:12.467 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.724 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:12.724 "name": "Existed_Raid", 00:10:12.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.724 "strip_size_kb": 64, 00:10:12.724 "state": "configuring", 00:10:12.724 "raid_level": "concat", 00:10:12.724 "superblock": false, 00:10:12.724 "num_base_bdevs": 3, 00:10:12.724 "num_base_bdevs_discovered": 1, 00:10:12.724 "num_base_bdevs_operational": 3, 00:10:12.724 "base_bdevs_list": [ 00:10:12.724 { 00:10:12.724 "name": "BaseBdev1", 00:10:12.724 "uuid": "bd813b29-42cf-11ef-96ac-773515fba644", 00:10:12.724 "is_configured": true, 00:10:12.724 "data_offset": 0, 00:10:12.724 "data_size": 65536 00:10:12.724 }, 00:10:12.724 { 00:10:12.724 "name": "BaseBdev2", 00:10:12.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.724 "is_configured": false, 00:10:12.724 "data_offset": 0, 00:10:12.724 "data_size": 0 00:10:12.724 }, 00:10:12.724 { 00:10:12.724 "name": "BaseBdev3", 00:10:12.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:12.724 "is_configured": false, 00:10:12.724 "data_offset": 0, 00:10:12.724 "data_size": 0 00:10:12.724 } 00:10:12.724 ] 00:10:12.724 }' 00:10:12.724 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:12.724 17:29:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.981 17:29:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:13.240 [2024-07-15 17:29:09.024133] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:13.240 [2024-07-15 17:29:09.024171] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x101770a34500 name Existed_Raid, state configuring 00:10:13.240 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:13.498 [2024-07-15 17:29:09.268150] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:10:13.498 [2024-07-15 17:29:09.268982] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:13.498 [2024-07-15 17:29:09.269023] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:13.498 [2024-07-15 17:29:09.269029] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:13.498 [2024-07-15 17:29:09.269039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.498 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:13.756 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:13.756 "name": "Existed_Raid", 00:10:13.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.756 "strip_size_kb": 64, 00:10:13.756 "state": "configuring", 00:10:13.756 "raid_level": "concat", 00:10:13.756 "superblock": false, 00:10:13.756 "num_base_bdevs": 3, 00:10:13.756 "num_base_bdevs_discovered": 1, 00:10:13.756 "num_base_bdevs_operational": 3, 00:10:13.756 "base_bdevs_list": [ 00:10:13.756 { 00:10:13.756 "name": "BaseBdev1", 00:10:13.756 "uuid": "bd813b29-42cf-11ef-96ac-773515fba644", 00:10:13.756 "is_configured": true, 00:10:13.756 "data_offset": 0, 00:10:13.756 "data_size": 65536 00:10:13.756 }, 00:10:13.756 { 00:10:13.756 "name": "BaseBdev2", 00:10:13.756 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.757 "is_configured": false, 00:10:13.757 "data_offset": 0, 00:10:13.757 "data_size": 0 00:10:13.757 }, 00:10:13.757 { 00:10:13.757 "name": "BaseBdev3", 00:10:13.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:13.757 "is_configured": false, 00:10:13.757 "data_offset": 0, 00:10:13.757 "data_size": 0 00:10:13.757 } 00:10:13.757 ] 00:10:13.757 }' 00:10:13.757 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:13.757 17:29:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:14.323 17:29:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:14.323 [2024-07-15 17:29:10.124305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.323 BaseBdev2 00:10:14.323 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:14.323 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:14.323 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:14.323 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:14.323 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:14.323 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:14.323 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:14.580 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:14.838 [ 00:10:14.838 { 00:10:14.838 "name": "BaseBdev2", 00:10:14.838 "aliases": [ 00:10:14.838 "bef6294d-42cf-11ef-96ac-773515fba644" 00:10:14.838 ], 00:10:14.838 "product_name": "Malloc disk", 00:10:14.838 "block_size": 512, 00:10:14.838 "num_blocks": 65536, 00:10:14.838 "uuid": "bef6294d-42cf-11ef-96ac-773515fba644", 00:10:14.838 "assigned_rate_limits": { 00:10:14.838 "rw_ios_per_sec": 0, 00:10:14.838 "rw_mbytes_per_sec": 0, 00:10:14.838 "r_mbytes_per_sec": 0, 00:10:14.838 "w_mbytes_per_sec": 0 00:10:14.838 }, 00:10:14.838 "claimed": true, 00:10:14.838 "claim_type": "exclusive_write", 00:10:14.838 "zoned": false, 00:10:14.838 "supported_io_types": { 00:10:14.838 "read": true, 00:10:14.838 "write": true, 00:10:14.838 "unmap": true, 00:10:14.838 "flush": true, 00:10:14.838 "reset": true, 00:10:14.838 "nvme_admin": false, 00:10:14.838 "nvme_io": false, 00:10:14.838 "nvme_io_md": false, 00:10:14.838 "write_zeroes": true, 00:10:14.838 "zcopy": true, 00:10:14.838 "get_zone_info": false, 00:10:14.838 "zone_management": false, 00:10:14.838 "zone_append": false, 00:10:14.838 "compare": false, 00:10:14.838 "compare_and_write": false, 00:10:14.838 "abort": true, 00:10:14.838 "seek_hole": false, 00:10:14.838 "seek_data": false, 00:10:14.838 "copy": true, 00:10:14.838 "nvme_iov_md": false 00:10:14.838 }, 00:10:14.838 "memory_domains": [ 00:10:14.838 { 00:10:14.838 "dma_device_id": "system", 00:10:14.838 "dma_device_type": 1 00:10:14.838 }, 00:10:14.838 { 00:10:14.838 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:14.838 "dma_device_type": 2 00:10:14.838 } 00:10:14.838 ], 00:10:14.838 "driver_specific": {} 00:10:14.838 } 00:10:14.838 ] 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.838 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.096 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:15.096 "name": "Existed_Raid", 00:10:15.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.096 "strip_size_kb": 64, 00:10:15.096 "state": "configuring", 00:10:15.096 "raid_level": "concat", 00:10:15.096 "superblock": false, 00:10:15.096 "num_base_bdevs": 3, 00:10:15.096 "num_base_bdevs_discovered": 2, 00:10:15.096 "num_base_bdevs_operational": 3, 00:10:15.096 "base_bdevs_list": [ 00:10:15.096 { 00:10:15.096 "name": "BaseBdev1", 00:10:15.096 "uuid": "bd813b29-42cf-11ef-96ac-773515fba644", 00:10:15.096 "is_configured": true, 00:10:15.096 "data_offset": 0, 00:10:15.096 "data_size": 65536 00:10:15.096 }, 00:10:15.096 { 00:10:15.096 "name": "BaseBdev2", 00:10:15.096 "uuid": "bef6294d-42cf-11ef-96ac-773515fba644", 00:10:15.096 "is_configured": true, 00:10:15.096 "data_offset": 0, 00:10:15.096 "data_size": 65536 00:10:15.096 }, 00:10:15.096 { 00:10:15.096 "name": "BaseBdev3", 00:10:15.096 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.096 "is_configured": false, 00:10:15.096 "data_offset": 0, 00:10:15.096 "data_size": 0 00:10:15.096 } 00:10:15.096 ] 00:10:15.096 }' 00:10:15.096 17:29:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:15.096 17:29:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.355 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:15.613 [2024-07-15 17:29:11.392328] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:15.613 [2024-07-15 17:29:11.392361] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x101770a34a00 00:10:15.613 [2024-07-15 17:29:11.392366] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:15.613 [2024-07-15 17:29:11.392389] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x101770a97e20 00:10:15.613 [2024-07-15 17:29:11.392482] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x101770a34a00 00:10:15.613 [2024-07-15 17:29:11.392487] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x101770a34a00 00:10:15.613 [2024-07-15 17:29:11.392547] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.613 BaseBdev3 00:10:15.613 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:15.613 17:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:15.613 17:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:15.613 17:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:15.613 17:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:15.613 17:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:15.613 17:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:15.871 17:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:16.134 [ 00:10:16.134 { 00:10:16.134 "name": "BaseBdev3", 00:10:16.134 "aliases": [ 00:10:16.134 "bfb7a604-42cf-11ef-96ac-773515fba644" 00:10:16.134 ], 00:10:16.134 "product_name": "Malloc disk", 00:10:16.134 "block_size": 512, 00:10:16.134 "num_blocks": 65536, 00:10:16.134 "uuid": "bfb7a604-42cf-11ef-96ac-773515fba644", 00:10:16.134 "assigned_rate_limits": { 00:10:16.134 "rw_ios_per_sec": 0, 00:10:16.134 "rw_mbytes_per_sec": 0, 00:10:16.134 "r_mbytes_per_sec": 0, 00:10:16.134 "w_mbytes_per_sec": 0 00:10:16.134 }, 00:10:16.134 "claimed": true, 00:10:16.134 "claim_type": "exclusive_write", 00:10:16.134 "zoned": false, 00:10:16.134 "supported_io_types": { 00:10:16.134 "read": true, 00:10:16.134 "write": true, 00:10:16.134 "unmap": true, 00:10:16.134 "flush": true, 00:10:16.134 "reset": true, 00:10:16.134 "nvme_admin": false, 00:10:16.134 "nvme_io": false, 00:10:16.134 "nvme_io_md": false, 00:10:16.134 "write_zeroes": true, 00:10:16.134 "zcopy": true, 00:10:16.134 "get_zone_info": false, 00:10:16.134 "zone_management": false, 00:10:16.134 "zone_append": false, 00:10:16.134 "compare": false, 00:10:16.134 "compare_and_write": false, 00:10:16.134 "abort": true, 00:10:16.134 "seek_hole": false, 00:10:16.134 "seek_data": false, 00:10:16.134 "copy": true, 00:10:16.134 "nvme_iov_md": false 00:10:16.134 }, 00:10:16.134 "memory_domains": [ 00:10:16.134 { 00:10:16.134 "dma_device_id": "system", 00:10:16.134 "dma_device_type": 1 00:10:16.134 }, 00:10:16.134 { 00:10:16.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:16.134 "dma_device_type": 2 00:10:16.134 } 00:10:16.134 ], 00:10:16.134 "driver_specific": {} 00:10:16.134 } 00:10:16.134 ] 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 
3 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.134 17:29:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.394 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:16.394 "name": "Existed_Raid", 00:10:16.394 "uuid": "bfb7ac84-42cf-11ef-96ac-773515fba644", 00:10:16.394 "strip_size_kb": 64, 00:10:16.394 "state": "online", 00:10:16.394 "raid_level": "concat", 00:10:16.394 "superblock": false, 00:10:16.394 "num_base_bdevs": 3, 00:10:16.394 "num_base_bdevs_discovered": 3, 00:10:16.394 "num_base_bdevs_operational": 3, 00:10:16.394 "base_bdevs_list": [ 00:10:16.394 { 00:10:16.394 "name": "BaseBdev1", 00:10:16.394 "uuid": "bd813b29-42cf-11ef-96ac-773515fba644", 00:10:16.394 "is_configured": true, 00:10:16.394 "data_offset": 0, 00:10:16.394 "data_size": 65536 00:10:16.394 }, 00:10:16.394 { 00:10:16.394 "name": "BaseBdev2", 00:10:16.394 "uuid": "bef6294d-42cf-11ef-96ac-773515fba644", 00:10:16.394 "is_configured": true, 00:10:16.394 "data_offset": 0, 00:10:16.394 "data_size": 65536 00:10:16.394 }, 00:10:16.394 { 00:10:16.394 "name": "BaseBdev3", 00:10:16.394 "uuid": "bfb7a604-42cf-11ef-96ac-773515fba644", 00:10:16.394 "is_configured": true, 00:10:16.394 "data_offset": 0, 00:10:16.394 "data_size": 65536 00:10:16.394 } 00:10:16.394 ] 00:10:16.394 }' 00:10:16.394 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:16.394 17:29:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.960 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:16.960 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:16.960 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:16.960 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:16.960 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:16.961 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:16.961 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:16.961 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:16.961 [2024-07-15 17:29:12.784240] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:17.219 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:17.219 "name": "Existed_Raid", 00:10:17.219 "aliases": [ 00:10:17.219 "bfb7ac84-42cf-11ef-96ac-773515fba644" 00:10:17.219 ], 00:10:17.219 "product_name": "Raid Volume", 00:10:17.219 "block_size": 512, 00:10:17.219 "num_blocks": 196608, 00:10:17.219 "uuid": "bfb7ac84-42cf-11ef-96ac-773515fba644", 00:10:17.219 "assigned_rate_limits": { 00:10:17.219 "rw_ios_per_sec": 0, 00:10:17.219 "rw_mbytes_per_sec": 0, 00:10:17.219 "r_mbytes_per_sec": 0, 00:10:17.219 "w_mbytes_per_sec": 0 00:10:17.219 }, 00:10:17.219 "claimed": false, 00:10:17.219 "zoned": false, 00:10:17.219 "supported_io_types": { 00:10:17.219 "read": true, 00:10:17.219 "write": true, 00:10:17.219 "unmap": true, 00:10:17.219 "flush": true, 00:10:17.219 "reset": true, 00:10:17.219 "nvme_admin": false, 00:10:17.219 "nvme_io": false, 00:10:17.219 "nvme_io_md": false, 00:10:17.219 "write_zeroes": true, 00:10:17.219 "zcopy": false, 00:10:17.219 "get_zone_info": false, 00:10:17.219 "zone_management": false, 00:10:17.219 "zone_append": false, 00:10:17.219 "compare": false, 00:10:17.219 "compare_and_write": false, 00:10:17.219 "abort": false, 00:10:17.219 "seek_hole": false, 00:10:17.219 "seek_data": false, 00:10:17.219 "copy": false, 00:10:17.219 "nvme_iov_md": false 00:10:17.219 }, 00:10:17.219 "memory_domains": [ 00:10:17.219 { 00:10:17.219 "dma_device_id": "system", 00:10:17.219 "dma_device_type": 1 00:10:17.219 }, 00:10:17.219 { 00:10:17.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.219 "dma_device_type": 2 00:10:17.219 }, 00:10:17.219 { 00:10:17.219 "dma_device_id": "system", 00:10:17.219 "dma_device_type": 1 00:10:17.219 }, 00:10:17.219 { 00:10:17.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.220 "dma_device_type": 2 00:10:17.220 }, 00:10:17.220 { 00:10:17.220 "dma_device_id": "system", 00:10:17.220 "dma_device_type": 1 00:10:17.220 }, 00:10:17.220 { 00:10:17.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.220 "dma_device_type": 2 00:10:17.220 } 00:10:17.220 ], 00:10:17.220 "driver_specific": { 00:10:17.220 "raid": { 00:10:17.220 "uuid": "bfb7ac84-42cf-11ef-96ac-773515fba644", 00:10:17.220 "strip_size_kb": 64, 00:10:17.220 "state": "online", 00:10:17.220 "raid_level": "concat", 00:10:17.220 "superblock": false, 00:10:17.220 "num_base_bdevs": 3, 00:10:17.220 "num_base_bdevs_discovered": 3, 00:10:17.220 "num_base_bdevs_operational": 3, 00:10:17.220 "base_bdevs_list": [ 00:10:17.220 { 00:10:17.220 "name": "BaseBdev1", 00:10:17.220 "uuid": "bd813b29-42cf-11ef-96ac-773515fba644", 00:10:17.220 "is_configured": true, 00:10:17.220 "data_offset": 0, 00:10:17.220 "data_size": 65536 00:10:17.220 }, 00:10:17.220 { 00:10:17.220 "name": "BaseBdev2", 00:10:17.220 "uuid": "bef6294d-42cf-11ef-96ac-773515fba644", 00:10:17.220 "is_configured": true, 00:10:17.220 "data_offset": 0, 00:10:17.220 "data_size": 65536 00:10:17.220 }, 00:10:17.220 { 00:10:17.220 "name": "BaseBdev3", 00:10:17.220 "uuid": "bfb7a604-42cf-11ef-96ac-773515fba644", 00:10:17.220 "is_configured": true, 00:10:17.220 "data_offset": 0, 00:10:17.220 "data_size": 65536 00:10:17.220 } 00:10:17.220 ] 00:10:17.220 } 00:10:17.220 } 00:10:17.220 }' 00:10:17.220 17:29:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:17.220 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:17.220 BaseBdev2 00:10:17.220 BaseBdev3' 00:10:17.220 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:17.220 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:17.220 17:29:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:17.478 "name": "BaseBdev1", 00:10:17.478 "aliases": [ 00:10:17.478 "bd813b29-42cf-11ef-96ac-773515fba644" 00:10:17.478 ], 00:10:17.478 "product_name": "Malloc disk", 00:10:17.478 "block_size": 512, 00:10:17.478 "num_blocks": 65536, 00:10:17.478 "uuid": "bd813b29-42cf-11ef-96ac-773515fba644", 00:10:17.478 "assigned_rate_limits": { 00:10:17.478 "rw_ios_per_sec": 0, 00:10:17.478 "rw_mbytes_per_sec": 0, 00:10:17.478 "r_mbytes_per_sec": 0, 00:10:17.478 "w_mbytes_per_sec": 0 00:10:17.478 }, 00:10:17.478 "claimed": true, 00:10:17.478 "claim_type": "exclusive_write", 00:10:17.478 "zoned": false, 00:10:17.478 "supported_io_types": { 00:10:17.478 "read": true, 00:10:17.478 "write": true, 00:10:17.478 "unmap": true, 00:10:17.478 "flush": true, 00:10:17.478 "reset": true, 00:10:17.478 "nvme_admin": false, 00:10:17.478 "nvme_io": false, 00:10:17.478 "nvme_io_md": false, 00:10:17.478 "write_zeroes": true, 00:10:17.478 "zcopy": true, 00:10:17.478 "get_zone_info": false, 00:10:17.478 "zone_management": false, 00:10:17.478 "zone_append": false, 00:10:17.478 "compare": false, 00:10:17.478 "compare_and_write": false, 00:10:17.478 "abort": true, 00:10:17.478 "seek_hole": false, 00:10:17.478 "seek_data": false, 00:10:17.478 "copy": true, 00:10:17.478 "nvme_iov_md": false 00:10:17.478 }, 00:10:17.478 "memory_domains": [ 00:10:17.478 { 00:10:17.478 "dma_device_id": "system", 00:10:17.478 "dma_device_type": 1 00:10:17.478 }, 00:10:17.478 { 00:10:17.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.478 "dma_device_type": 2 00:10:17.478 } 00:10:17.478 ], 00:10:17.478 "driver_specific": {} 00:10:17.478 }' 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:17.478 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:17.736 "name": "BaseBdev2", 00:10:17.736 "aliases": [ 00:10:17.736 "bef6294d-42cf-11ef-96ac-773515fba644" 00:10:17.736 ], 00:10:17.736 "product_name": "Malloc disk", 00:10:17.736 "block_size": 512, 00:10:17.736 "num_blocks": 65536, 00:10:17.736 "uuid": "bef6294d-42cf-11ef-96ac-773515fba644", 00:10:17.736 "assigned_rate_limits": { 00:10:17.736 "rw_ios_per_sec": 0, 00:10:17.736 "rw_mbytes_per_sec": 0, 00:10:17.736 "r_mbytes_per_sec": 0, 00:10:17.736 "w_mbytes_per_sec": 0 00:10:17.736 }, 00:10:17.736 "claimed": true, 00:10:17.736 "claim_type": "exclusive_write", 00:10:17.736 "zoned": false, 00:10:17.736 "supported_io_types": { 00:10:17.736 "read": true, 00:10:17.736 "write": true, 00:10:17.736 "unmap": true, 00:10:17.736 "flush": true, 00:10:17.736 "reset": true, 00:10:17.736 "nvme_admin": false, 00:10:17.736 "nvme_io": false, 00:10:17.736 "nvme_io_md": false, 00:10:17.736 "write_zeroes": true, 00:10:17.736 "zcopy": true, 00:10:17.736 "get_zone_info": false, 00:10:17.736 "zone_management": false, 00:10:17.736 "zone_append": false, 00:10:17.736 "compare": false, 00:10:17.736 "compare_and_write": false, 00:10:17.736 "abort": true, 00:10:17.736 "seek_hole": false, 00:10:17.736 "seek_data": false, 00:10:17.736 "copy": true, 00:10:17.736 "nvme_iov_md": false 00:10:17.736 }, 00:10:17.736 "memory_domains": [ 00:10:17.736 { 00:10:17.736 "dma_device_id": "system", 00:10:17.736 "dma_device_type": 1 00:10:17.736 }, 00:10:17.736 { 00:10:17.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.736 "dma_device_type": 2 00:10:17.736 } 00:10:17.736 ], 00:10:17.736 "driver_specific": {} 00:10:17.736 }' 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:17.736 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:18.009 "name": "BaseBdev3", 00:10:18.009 "aliases": [ 00:10:18.009 "bfb7a604-42cf-11ef-96ac-773515fba644" 00:10:18.009 ], 00:10:18.009 "product_name": "Malloc disk", 00:10:18.009 "block_size": 512, 00:10:18.009 "num_blocks": 65536, 00:10:18.009 "uuid": "bfb7a604-42cf-11ef-96ac-773515fba644", 00:10:18.009 "assigned_rate_limits": { 00:10:18.009 "rw_ios_per_sec": 0, 00:10:18.009 "rw_mbytes_per_sec": 0, 00:10:18.009 "r_mbytes_per_sec": 0, 00:10:18.009 "w_mbytes_per_sec": 0 00:10:18.009 }, 00:10:18.009 "claimed": true, 00:10:18.009 "claim_type": "exclusive_write", 00:10:18.009 "zoned": false, 00:10:18.009 "supported_io_types": { 00:10:18.009 "read": true, 00:10:18.009 "write": true, 00:10:18.009 "unmap": true, 00:10:18.009 "flush": true, 00:10:18.009 "reset": true, 00:10:18.009 "nvme_admin": false, 00:10:18.009 "nvme_io": false, 00:10:18.009 "nvme_io_md": false, 00:10:18.009 "write_zeroes": true, 00:10:18.009 "zcopy": true, 00:10:18.009 "get_zone_info": false, 00:10:18.009 "zone_management": false, 00:10:18.009 "zone_append": false, 00:10:18.009 "compare": false, 00:10:18.009 "compare_and_write": false, 00:10:18.009 "abort": true, 00:10:18.009 "seek_hole": false, 00:10:18.009 "seek_data": false, 00:10:18.009 "copy": true, 00:10:18.009 "nvme_iov_md": false 00:10:18.009 }, 00:10:18.009 "memory_domains": [ 00:10:18.009 { 00:10:18.009 "dma_device_id": "system", 00:10:18.009 "dma_device_type": 1 00:10:18.009 }, 00:10:18.009 { 00:10:18.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:18.009 "dma_device_type": 2 00:10:18.009 } 00:10:18.009 ], 00:10:18.009 "driver_specific": {} 00:10:18.009 }' 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:18.009 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:18.267 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:18.267 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:18.267 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:18.267 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:18.267 17:29:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:18.525 [2024-07-15 17:29:14.128236] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:10:18.525 [2024-07-15 17:29:14.128261] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.525 [2024-07-15 17:29:14.128291] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.525 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.802 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:18.802 "name": "Existed_Raid", 00:10:18.802 "uuid": "bfb7ac84-42cf-11ef-96ac-773515fba644", 00:10:18.802 "strip_size_kb": 64, 00:10:18.802 "state": "offline", 00:10:18.802 "raid_level": "concat", 00:10:18.802 "superblock": false, 00:10:18.802 "num_base_bdevs": 3, 00:10:18.802 "num_base_bdevs_discovered": 2, 00:10:18.802 "num_base_bdevs_operational": 2, 00:10:18.802 "base_bdevs_list": [ 00:10:18.802 { 00:10:18.802 "name": null, 00:10:18.802 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.802 "is_configured": false, 00:10:18.802 "data_offset": 0, 00:10:18.802 "data_size": 65536 00:10:18.802 }, 00:10:18.802 { 00:10:18.802 "name": "BaseBdev2", 00:10:18.802 "uuid": "bef6294d-42cf-11ef-96ac-773515fba644", 00:10:18.802 "is_configured": true, 00:10:18.802 "data_offset": 0, 00:10:18.802 "data_size": 65536 00:10:18.802 }, 00:10:18.802 { 00:10:18.802 "name": "BaseBdev3", 00:10:18.802 "uuid": "bfb7a604-42cf-11ef-96ac-773515fba644", 00:10:18.802 "is_configured": true, 00:10:18.802 "data_offset": 0, 00:10:18.802 "data_size": 65536 00:10:18.802 } 00:10:18.802 ] 00:10:18.802 }' 00:10:18.802 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
00:10:18.802 17:29:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.069 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:19.069 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:19.069 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.069 17:29:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:19.325 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:19.325 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.325 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:19.582 [2024-07-15 17:29:15.226270] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.582 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:19.582 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:19.582 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:19.582 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.840 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:19.840 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.840 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:20.098 [2024-07-15 17:29:15.704277] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:20.098 [2024-07-15 17:29:15.704323] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x101770a34a00 name Existed_Raid, state offline 00:10:20.098 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:20.098 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:20.098 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:20.098 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:20.356 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:20.356 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:20.356 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:20.356 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:20.356 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:20.356 17:29:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:20.614 
BaseBdev2 00:10:20.614 17:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:20.614 17:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:20.614 17:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:20.614 17:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:20.614 17:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:20.614 17:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:20.614 17:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:20.873 17:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:21.131 [ 00:10:21.131 { 00:10:21.131 "name": "BaseBdev2", 00:10:21.131 "aliases": [ 00:10:21.131 "c29948cc-42cf-11ef-96ac-773515fba644" 00:10:21.131 ], 00:10:21.131 "product_name": "Malloc disk", 00:10:21.131 "block_size": 512, 00:10:21.131 "num_blocks": 65536, 00:10:21.131 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:21.131 "assigned_rate_limits": { 00:10:21.131 "rw_ios_per_sec": 0, 00:10:21.131 "rw_mbytes_per_sec": 0, 00:10:21.131 "r_mbytes_per_sec": 0, 00:10:21.131 "w_mbytes_per_sec": 0 00:10:21.131 }, 00:10:21.131 "claimed": false, 00:10:21.131 "zoned": false, 00:10:21.131 "supported_io_types": { 00:10:21.131 "read": true, 00:10:21.131 "write": true, 00:10:21.131 "unmap": true, 00:10:21.131 "flush": true, 00:10:21.131 "reset": true, 00:10:21.131 "nvme_admin": false, 00:10:21.131 "nvme_io": false, 00:10:21.131 "nvme_io_md": false, 00:10:21.131 "write_zeroes": true, 00:10:21.131 "zcopy": true, 00:10:21.131 "get_zone_info": false, 00:10:21.131 "zone_management": false, 00:10:21.132 "zone_append": false, 00:10:21.132 "compare": false, 00:10:21.132 "compare_and_write": false, 00:10:21.132 "abort": true, 00:10:21.132 "seek_hole": false, 00:10:21.132 "seek_data": false, 00:10:21.132 "copy": true, 00:10:21.132 "nvme_iov_md": false 00:10:21.132 }, 00:10:21.132 "memory_domains": [ 00:10:21.132 { 00:10:21.132 "dma_device_id": "system", 00:10:21.132 "dma_device_type": 1 00:10:21.132 }, 00:10:21.132 { 00:10:21.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.132 "dma_device_type": 2 00:10:21.132 } 00:10:21.132 ], 00:10:21.132 "driver_specific": {} 00:10:21.132 } 00:10:21.132 ] 00:10:21.132 17:29:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:21.132 17:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:21.132 17:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:21.132 17:29:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:21.390 BaseBdev3 00:10:21.390 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:21.390 17:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:21.390 17:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:10:21.390 17:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:21.390 17:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:21.390 17:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:21.390 17:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:21.648 17:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:21.906 [ 00:10:21.906 { 00:10:21.906 "name": "BaseBdev3", 00:10:21.906 "aliases": [ 00:10:21.906 "c318dab0-42cf-11ef-96ac-773515fba644" 00:10:21.906 ], 00:10:21.906 "product_name": "Malloc disk", 00:10:21.906 "block_size": 512, 00:10:21.906 "num_blocks": 65536, 00:10:21.906 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:21.906 "assigned_rate_limits": { 00:10:21.906 "rw_ios_per_sec": 0, 00:10:21.906 "rw_mbytes_per_sec": 0, 00:10:21.906 "r_mbytes_per_sec": 0, 00:10:21.906 "w_mbytes_per_sec": 0 00:10:21.906 }, 00:10:21.906 "claimed": false, 00:10:21.906 "zoned": false, 00:10:21.906 "supported_io_types": { 00:10:21.906 "read": true, 00:10:21.906 "write": true, 00:10:21.906 "unmap": true, 00:10:21.906 "flush": true, 00:10:21.906 "reset": true, 00:10:21.906 "nvme_admin": false, 00:10:21.906 "nvme_io": false, 00:10:21.906 "nvme_io_md": false, 00:10:21.906 "write_zeroes": true, 00:10:21.906 "zcopy": true, 00:10:21.906 "get_zone_info": false, 00:10:21.906 "zone_management": false, 00:10:21.906 "zone_append": false, 00:10:21.906 "compare": false, 00:10:21.906 "compare_and_write": false, 00:10:21.906 "abort": true, 00:10:21.906 "seek_hole": false, 00:10:21.906 "seek_data": false, 00:10:21.906 "copy": true, 00:10:21.906 "nvme_iov_md": false 00:10:21.906 }, 00:10:21.906 "memory_domains": [ 00:10:21.906 { 00:10:21.907 "dma_device_id": "system", 00:10:21.907 "dma_device_type": 1 00:10:21.907 }, 00:10:21.907 { 00:10:21.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.907 "dma_device_type": 2 00:10:21.907 } 00:10:21.907 ], 00:10:21.907 "driver_specific": {} 00:10:21.907 } 00:10:21.907 ] 00:10:21.907 17:29:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:21.907 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:21.907 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:21.907 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:22.165 [2024-07-15 17:29:17.790407] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.165 [2024-07-15 17:29:17.790457] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.165 [2024-07-15 17:29:17.790465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.165 [2024-07-15 17:29:17.791028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.165 17:29:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.424 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:22.424 "name": "Existed_Raid", 00:10:22.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.424 "strip_size_kb": 64, 00:10:22.424 "state": "configuring", 00:10:22.424 "raid_level": "concat", 00:10:22.424 "superblock": false, 00:10:22.424 "num_base_bdevs": 3, 00:10:22.424 "num_base_bdevs_discovered": 2, 00:10:22.424 "num_base_bdevs_operational": 3, 00:10:22.424 "base_bdevs_list": [ 00:10:22.424 { 00:10:22.424 "name": "BaseBdev1", 00:10:22.424 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.424 "is_configured": false, 00:10:22.424 "data_offset": 0, 00:10:22.424 "data_size": 0 00:10:22.424 }, 00:10:22.424 { 00:10:22.424 "name": "BaseBdev2", 00:10:22.424 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:22.424 "is_configured": true, 00:10:22.424 "data_offset": 0, 00:10:22.424 "data_size": 65536 00:10:22.424 }, 00:10:22.424 { 00:10:22.424 "name": "BaseBdev3", 00:10:22.424 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:22.424 "is_configured": true, 00:10:22.424 "data_offset": 0, 00:10:22.424 "data_size": 65536 00:10:22.424 } 00:10:22.424 ] 00:10:22.424 }' 00:10:22.424 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:22.424 17:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.683 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:22.941 [2024-07-15 17:29:18.646438] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:22.941 
17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.941 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.198 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:23.198 "name": "Existed_Raid", 00:10:23.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.198 "strip_size_kb": 64, 00:10:23.198 "state": "configuring", 00:10:23.198 "raid_level": "concat", 00:10:23.198 "superblock": false, 00:10:23.198 "num_base_bdevs": 3, 00:10:23.198 "num_base_bdevs_discovered": 1, 00:10:23.198 "num_base_bdevs_operational": 3, 00:10:23.198 "base_bdevs_list": [ 00:10:23.198 { 00:10:23.198 "name": "BaseBdev1", 00:10:23.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.198 "is_configured": false, 00:10:23.198 "data_offset": 0, 00:10:23.198 "data_size": 0 00:10:23.198 }, 00:10:23.198 { 00:10:23.198 "name": null, 00:10:23.198 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:23.198 "is_configured": false, 00:10:23.198 "data_offset": 0, 00:10:23.198 "data_size": 65536 00:10:23.198 }, 00:10:23.198 { 00:10:23.198 "name": "BaseBdev3", 00:10:23.198 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:23.198 "is_configured": true, 00:10:23.198 "data_offset": 0, 00:10:23.198 "data_size": 65536 00:10:23.198 } 00:10:23.198 ] 00:10:23.198 }' 00:10:23.198 17:29:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:23.198 17:29:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.455 17:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.455 17:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:23.714 17:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:23.714 17:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.972 [2024-07-15 17:29:19.778579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:23.972 BaseBdev1 00:10:23.972 17:29:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:23.972 17:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:23.972 17:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:23.972 17:29:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:10:23.972 17:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:23.972 17:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:23.972 17:29:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:24.230 17:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:24.489 [ 00:10:24.489 { 00:10:24.489 "name": "BaseBdev1", 00:10:24.489 "aliases": [ 00:10:24.489 "c4b749e5-42cf-11ef-96ac-773515fba644" 00:10:24.489 ], 00:10:24.489 "product_name": "Malloc disk", 00:10:24.489 "block_size": 512, 00:10:24.489 "num_blocks": 65536, 00:10:24.489 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:24.489 "assigned_rate_limits": { 00:10:24.489 "rw_ios_per_sec": 0, 00:10:24.489 "rw_mbytes_per_sec": 0, 00:10:24.489 "r_mbytes_per_sec": 0, 00:10:24.489 "w_mbytes_per_sec": 0 00:10:24.489 }, 00:10:24.489 "claimed": true, 00:10:24.489 "claim_type": "exclusive_write", 00:10:24.489 "zoned": false, 00:10:24.489 "supported_io_types": { 00:10:24.489 "read": true, 00:10:24.489 "write": true, 00:10:24.489 "unmap": true, 00:10:24.489 "flush": true, 00:10:24.489 "reset": true, 00:10:24.489 "nvme_admin": false, 00:10:24.489 "nvme_io": false, 00:10:24.489 "nvme_io_md": false, 00:10:24.489 "write_zeroes": true, 00:10:24.489 "zcopy": true, 00:10:24.489 "get_zone_info": false, 00:10:24.489 "zone_management": false, 00:10:24.489 "zone_append": false, 00:10:24.489 "compare": false, 00:10:24.489 "compare_and_write": false, 00:10:24.489 "abort": true, 00:10:24.489 "seek_hole": false, 00:10:24.489 "seek_data": false, 00:10:24.489 "copy": true, 00:10:24.489 "nvme_iov_md": false 00:10:24.489 }, 00:10:24.489 "memory_domains": [ 00:10:24.489 { 00:10:24.489 "dma_device_id": "system", 00:10:24.489 "dma_device_type": 1 00:10:24.489 }, 00:10:24.489 { 00:10:24.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.489 "dma_device_type": 2 00:10:24.489 } 00:10:24.489 ], 00:10:24.489 "driver_specific": {} 00:10:24.489 } 00:10:24.489 ] 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.490 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.058 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:25.058 "name": "Existed_Raid", 00:10:25.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.058 "strip_size_kb": 64, 00:10:25.058 "state": "configuring", 00:10:25.058 "raid_level": "concat", 00:10:25.058 "superblock": false, 00:10:25.058 "num_base_bdevs": 3, 00:10:25.058 "num_base_bdevs_discovered": 2, 00:10:25.058 "num_base_bdevs_operational": 3, 00:10:25.058 "base_bdevs_list": [ 00:10:25.058 { 00:10:25.058 "name": "BaseBdev1", 00:10:25.058 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:25.058 "is_configured": true, 00:10:25.058 "data_offset": 0, 00:10:25.058 "data_size": 65536 00:10:25.058 }, 00:10:25.058 { 00:10:25.058 "name": null, 00:10:25.058 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:25.058 "is_configured": false, 00:10:25.058 "data_offset": 0, 00:10:25.058 "data_size": 65536 00:10:25.058 }, 00:10:25.058 { 00:10:25.058 "name": "BaseBdev3", 00:10:25.058 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:25.058 "is_configured": true, 00:10:25.058 "data_offset": 0, 00:10:25.058 "data_size": 65536 00:10:25.058 } 00:10:25.058 ] 00:10:25.058 }' 00:10:25.058 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:25.058 17:29:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.316 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.316 17:29:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:25.574 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:25.574 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:25.833 [2024-07-15 17:29:21.406465] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.833 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.091 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:26.091 "name": "Existed_Raid", 00:10:26.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.091 "strip_size_kb": 64, 00:10:26.091 "state": "configuring", 00:10:26.091 "raid_level": "concat", 00:10:26.091 "superblock": false, 00:10:26.091 "num_base_bdevs": 3, 00:10:26.091 "num_base_bdevs_discovered": 1, 00:10:26.091 "num_base_bdevs_operational": 3, 00:10:26.091 "base_bdevs_list": [ 00:10:26.091 { 00:10:26.091 "name": "BaseBdev1", 00:10:26.091 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:26.091 "is_configured": true, 00:10:26.091 "data_offset": 0, 00:10:26.091 "data_size": 65536 00:10:26.091 }, 00:10:26.091 { 00:10:26.091 "name": null, 00:10:26.091 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:26.091 "is_configured": false, 00:10:26.091 "data_offset": 0, 00:10:26.091 "data_size": 65536 00:10:26.091 }, 00:10:26.091 { 00:10:26.091 "name": null, 00:10:26.091 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:26.091 "is_configured": false, 00:10:26.091 "data_offset": 0, 00:10:26.091 "data_size": 65536 00:10:26.091 } 00:10:26.091 ] 00:10:26.091 }' 00:10:26.091 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:26.091 17:29:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.358 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.358 17:29:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:26.617 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:26.617 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:26.938 [2024-07-15 17:29:22.562498] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:26.938 17:29:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.938 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.196 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.196 "name": "Existed_Raid", 00:10:27.196 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.196 "strip_size_kb": 64, 00:10:27.196 "state": "configuring", 00:10:27.196 "raid_level": "concat", 00:10:27.196 "superblock": false, 00:10:27.196 "num_base_bdevs": 3, 00:10:27.196 "num_base_bdevs_discovered": 2, 00:10:27.196 "num_base_bdevs_operational": 3, 00:10:27.196 "base_bdevs_list": [ 00:10:27.196 { 00:10:27.196 "name": "BaseBdev1", 00:10:27.196 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:27.196 "is_configured": true, 00:10:27.196 "data_offset": 0, 00:10:27.196 "data_size": 65536 00:10:27.196 }, 00:10:27.196 { 00:10:27.196 "name": null, 00:10:27.196 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:27.196 "is_configured": false, 00:10:27.196 "data_offset": 0, 00:10:27.196 "data_size": 65536 00:10:27.196 }, 00:10:27.196 { 00:10:27.196 "name": "BaseBdev3", 00:10:27.196 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:27.196 "is_configured": true, 00:10:27.196 "data_offset": 0, 00:10:27.196 "data_size": 65536 00:10:27.196 } 00:10:27.196 ] 00:10:27.196 }' 00:10:27.196 17:29:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.196 17:29:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.453 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.453 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.711 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:27.711 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:27.969 [2024-07-15 17:29:23.578521] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.969 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.226 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:28.226 "name": "Existed_Raid", 00:10:28.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.226 "strip_size_kb": 64, 00:10:28.226 "state": "configuring", 00:10:28.226 "raid_level": "concat", 00:10:28.226 "superblock": false, 00:10:28.226 "num_base_bdevs": 3, 00:10:28.226 "num_base_bdevs_discovered": 1, 00:10:28.226 "num_base_bdevs_operational": 3, 00:10:28.226 "base_bdevs_list": [ 00:10:28.226 { 00:10:28.226 "name": null, 00:10:28.226 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:28.226 "is_configured": false, 00:10:28.226 "data_offset": 0, 00:10:28.226 "data_size": 65536 00:10:28.226 }, 00:10:28.226 { 00:10:28.226 "name": null, 00:10:28.226 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:28.226 "is_configured": false, 00:10:28.226 "data_offset": 0, 00:10:28.226 "data_size": 65536 00:10:28.226 }, 00:10:28.226 { 00:10:28.226 "name": "BaseBdev3", 00:10:28.226 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:28.226 "is_configured": true, 00:10:28.226 "data_offset": 0, 00:10:28.227 "data_size": 65536 00:10:28.227 } 00:10:28.227 ] 00:10:28.227 }' 00:10:28.227 17:29:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:28.227 17:29:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.484 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.484 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:28.743 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:28.743 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:29.001 [2024-07-15 17:29:24.644411] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.001 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.001 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:29.001 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:29.001 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:29.001 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:29.002 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:29.002 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
00:10:29.002 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.002 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.002 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.002 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.002 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.259 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.259 "name": "Existed_Raid", 00:10:29.259 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.259 "strip_size_kb": 64, 00:10:29.259 "state": "configuring", 00:10:29.259 "raid_level": "concat", 00:10:29.259 "superblock": false, 00:10:29.259 "num_base_bdevs": 3, 00:10:29.259 "num_base_bdevs_discovered": 2, 00:10:29.259 "num_base_bdevs_operational": 3, 00:10:29.259 "base_bdevs_list": [ 00:10:29.259 { 00:10:29.259 "name": null, 00:10:29.259 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:29.259 "is_configured": false, 00:10:29.259 "data_offset": 0, 00:10:29.259 "data_size": 65536 00:10:29.259 }, 00:10:29.259 { 00:10:29.259 "name": "BaseBdev2", 00:10:29.259 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:29.259 "is_configured": true, 00:10:29.259 "data_offset": 0, 00:10:29.259 "data_size": 65536 00:10:29.259 }, 00:10:29.259 { 00:10:29.259 "name": "BaseBdev3", 00:10:29.259 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:29.259 "is_configured": true, 00:10:29.259 "data_offset": 0, 00:10:29.259 "data_size": 65536 00:10:29.259 } 00:10:29.259 ] 00:10:29.259 }' 00:10:29.259 17:29:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.259 17:29:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.517 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.517 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:29.775 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:29.775 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.775 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:30.033 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u c4b749e5-42cf-11ef-96ac-773515fba644 00:10:30.292 [2024-07-15 17:29:25.912594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:30.292 [2024-07-15 17:29:25.912620] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x101770a34a00 00:10:30.292 [2024-07-15 17:29:25.912625] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:30.292 [2024-07-15 17:29:25.912649] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x101770a97e20 00:10:30.292 [2024-07-15 
17:29:25.912725] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x101770a34a00 00:10:30.292 [2024-07-15 17:29:25.912730] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x101770a34a00 00:10:30.292 [2024-07-15 17:29:25.912762] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.292 NewBaseBdev 00:10:30.292 17:29:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:30.292 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:10:30.292 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:30.292 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:30.292 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:30.292 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:30.292 17:29:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:30.550 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:30.550 [ 00:10:30.550 { 00:10:30.550 "name": "NewBaseBdev", 00:10:30.550 "aliases": [ 00:10:30.550 "c4b749e5-42cf-11ef-96ac-773515fba644" 00:10:30.550 ], 00:10:30.550 "product_name": "Malloc disk", 00:10:30.550 "block_size": 512, 00:10:30.550 "num_blocks": 65536, 00:10:30.550 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:30.550 "assigned_rate_limits": { 00:10:30.550 "rw_ios_per_sec": 0, 00:10:30.550 "rw_mbytes_per_sec": 0, 00:10:30.550 "r_mbytes_per_sec": 0, 00:10:30.550 "w_mbytes_per_sec": 0 00:10:30.550 }, 00:10:30.550 "claimed": true, 00:10:30.550 "claim_type": "exclusive_write", 00:10:30.550 "zoned": false, 00:10:30.550 "supported_io_types": { 00:10:30.550 "read": true, 00:10:30.550 "write": true, 00:10:30.550 "unmap": true, 00:10:30.550 "flush": true, 00:10:30.550 "reset": true, 00:10:30.550 "nvme_admin": false, 00:10:30.550 "nvme_io": false, 00:10:30.550 "nvme_io_md": false, 00:10:30.550 "write_zeroes": true, 00:10:30.550 "zcopy": true, 00:10:30.550 "get_zone_info": false, 00:10:30.550 "zone_management": false, 00:10:30.550 "zone_append": false, 00:10:30.550 "compare": false, 00:10:30.550 "compare_and_write": false, 00:10:30.550 "abort": true, 00:10:30.550 "seek_hole": false, 00:10:30.550 "seek_data": false, 00:10:30.550 "copy": true, 00:10:30.550 "nvme_iov_md": false 00:10:30.550 }, 00:10:30.550 "memory_domains": [ 00:10:30.550 { 00:10:30.550 "dma_device_id": "system", 00:10:30.550 "dma_device_type": 1 00:10:30.550 }, 00:10:30.550 { 00:10:30.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.550 "dma_device_type": 2 00:10:30.550 } 00:10:30.550 ], 00:10:30.550 "driver_specific": {} 00:10:30.550 } 00:10:30.550 ] 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:30.808 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.066 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:31.066 "name": "Existed_Raid", 00:10:31.066 "uuid": "c85f4a2e-42cf-11ef-96ac-773515fba644", 00:10:31.067 "strip_size_kb": 64, 00:10:31.067 "state": "online", 00:10:31.067 "raid_level": "concat", 00:10:31.067 "superblock": false, 00:10:31.067 "num_base_bdevs": 3, 00:10:31.067 "num_base_bdevs_discovered": 3, 00:10:31.067 "num_base_bdevs_operational": 3, 00:10:31.067 "base_bdevs_list": [ 00:10:31.067 { 00:10:31.067 "name": "NewBaseBdev", 00:10:31.067 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:31.067 "is_configured": true, 00:10:31.067 "data_offset": 0, 00:10:31.067 "data_size": 65536 00:10:31.067 }, 00:10:31.067 { 00:10:31.067 "name": "BaseBdev2", 00:10:31.067 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:31.067 "is_configured": true, 00:10:31.067 "data_offset": 0, 00:10:31.067 "data_size": 65536 00:10:31.067 }, 00:10:31.067 { 00:10:31.067 "name": "BaseBdev3", 00:10:31.067 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:31.067 "is_configured": true, 00:10:31.067 "data_offset": 0, 00:10:31.067 "data_size": 65536 00:10:31.067 } 00:10:31.067 ] 00:10:31.067 }' 00:10:31.067 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:31.067 17:29:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.324 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.324 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:31.324 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:31.324 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:31.324 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:31.324 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:31.324 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:31.324 17:29:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:31.583 [2024-07-15 17:29:27.172505] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.583 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:31.583 "name": "Existed_Raid", 00:10:31.583 "aliases": [ 00:10:31.583 "c85f4a2e-42cf-11ef-96ac-773515fba644" 00:10:31.583 ], 00:10:31.583 "product_name": "Raid Volume", 00:10:31.583 "block_size": 512, 00:10:31.583 "num_blocks": 196608, 00:10:31.583 "uuid": "c85f4a2e-42cf-11ef-96ac-773515fba644", 00:10:31.583 "assigned_rate_limits": { 00:10:31.583 "rw_ios_per_sec": 0, 00:10:31.583 "rw_mbytes_per_sec": 0, 00:10:31.583 "r_mbytes_per_sec": 0, 00:10:31.583 "w_mbytes_per_sec": 0 00:10:31.583 }, 00:10:31.583 "claimed": false, 00:10:31.583 "zoned": false, 00:10:31.583 "supported_io_types": { 00:10:31.583 "read": true, 00:10:31.583 "write": true, 00:10:31.584 "unmap": true, 00:10:31.584 "flush": true, 00:10:31.584 "reset": true, 00:10:31.584 "nvme_admin": false, 00:10:31.584 "nvme_io": false, 00:10:31.584 "nvme_io_md": false, 00:10:31.584 "write_zeroes": true, 00:10:31.584 "zcopy": false, 00:10:31.584 "get_zone_info": false, 00:10:31.584 "zone_management": false, 00:10:31.584 "zone_append": false, 00:10:31.584 "compare": false, 00:10:31.584 "compare_and_write": false, 00:10:31.584 "abort": false, 00:10:31.584 "seek_hole": false, 00:10:31.584 "seek_data": false, 00:10:31.584 "copy": false, 00:10:31.584 "nvme_iov_md": false 00:10:31.584 }, 00:10:31.584 "memory_domains": [ 00:10:31.584 { 00:10:31.584 "dma_device_id": "system", 00:10:31.584 "dma_device_type": 1 00:10:31.584 }, 00:10:31.584 { 00:10:31.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.584 "dma_device_type": 2 00:10:31.584 }, 00:10:31.584 { 00:10:31.584 "dma_device_id": "system", 00:10:31.584 "dma_device_type": 1 00:10:31.584 }, 00:10:31.584 { 00:10:31.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.584 "dma_device_type": 2 00:10:31.584 }, 00:10:31.584 { 00:10:31.584 "dma_device_id": "system", 00:10:31.584 "dma_device_type": 1 00:10:31.584 }, 00:10:31.584 { 00:10:31.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.584 "dma_device_type": 2 00:10:31.584 } 00:10:31.584 ], 00:10:31.584 "driver_specific": { 00:10:31.584 "raid": { 00:10:31.584 "uuid": "c85f4a2e-42cf-11ef-96ac-773515fba644", 00:10:31.584 "strip_size_kb": 64, 00:10:31.584 "state": "online", 00:10:31.584 "raid_level": "concat", 00:10:31.584 "superblock": false, 00:10:31.584 "num_base_bdevs": 3, 00:10:31.585 "num_base_bdevs_discovered": 3, 00:10:31.585 "num_base_bdevs_operational": 3, 00:10:31.585 "base_bdevs_list": [ 00:10:31.585 { 00:10:31.585 "name": "NewBaseBdev", 00:10:31.585 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:31.585 "is_configured": true, 00:10:31.585 "data_offset": 0, 00:10:31.585 "data_size": 65536 00:10:31.585 }, 00:10:31.585 { 00:10:31.585 "name": "BaseBdev2", 00:10:31.585 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:31.585 "is_configured": true, 00:10:31.585 "data_offset": 0, 00:10:31.585 "data_size": 65536 00:10:31.585 }, 00:10:31.585 { 00:10:31.585 "name": "BaseBdev3", 00:10:31.585 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:31.585 "is_configured": true, 00:10:31.585 "data_offset": 0, 00:10:31.585 "data_size": 65536 00:10:31.585 } 00:10:31.585 ] 00:10:31.585 } 00:10:31.585 } 00:10:31.585 }' 00:10:31.585 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.585 17:29:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:31.585 BaseBdev2 00:10:31.585 BaseBdev3' 00:10:31.585 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.585 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:31.585 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.849 "name": "NewBaseBdev", 00:10:31.849 "aliases": [ 00:10:31.849 "c4b749e5-42cf-11ef-96ac-773515fba644" 00:10:31.849 ], 00:10:31.849 "product_name": "Malloc disk", 00:10:31.849 "block_size": 512, 00:10:31.849 "num_blocks": 65536, 00:10:31.849 "uuid": "c4b749e5-42cf-11ef-96ac-773515fba644", 00:10:31.849 "assigned_rate_limits": { 00:10:31.849 "rw_ios_per_sec": 0, 00:10:31.849 "rw_mbytes_per_sec": 0, 00:10:31.849 "r_mbytes_per_sec": 0, 00:10:31.849 "w_mbytes_per_sec": 0 00:10:31.849 }, 00:10:31.849 "claimed": true, 00:10:31.849 "claim_type": "exclusive_write", 00:10:31.849 "zoned": false, 00:10:31.849 "supported_io_types": { 00:10:31.849 "read": true, 00:10:31.849 "write": true, 00:10:31.849 "unmap": true, 00:10:31.849 "flush": true, 00:10:31.849 "reset": true, 00:10:31.849 "nvme_admin": false, 00:10:31.849 "nvme_io": false, 00:10:31.849 "nvme_io_md": false, 00:10:31.849 "write_zeroes": true, 00:10:31.849 "zcopy": true, 00:10:31.849 "get_zone_info": false, 00:10:31.849 "zone_management": false, 00:10:31.849 "zone_append": false, 00:10:31.849 "compare": false, 00:10:31.849 "compare_and_write": false, 00:10:31.849 "abort": true, 00:10:31.849 "seek_hole": false, 00:10:31.849 "seek_data": false, 00:10:31.849 "copy": true, 00:10:31.849 "nvme_iov_md": false 00:10:31.849 }, 00:10:31.849 "memory_domains": [ 00:10:31.849 { 00:10:31.849 "dma_device_id": "system", 00:10:31.849 "dma_device_type": 1 00:10:31.849 }, 00:10:31.849 { 00:10:31.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.849 "dma_device_type": 2 00:10:31.849 } 00:10:31.849 ], 00:10:31.849 "driver_specific": {} 00:10:31.849 }' 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:31.849 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:32.106 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:32.107 "name": "BaseBdev2", 00:10:32.107 "aliases": [ 00:10:32.107 "c29948cc-42cf-11ef-96ac-773515fba644" 00:10:32.107 ], 00:10:32.107 "product_name": "Malloc disk", 00:10:32.107 "block_size": 512, 00:10:32.107 "num_blocks": 65536, 00:10:32.107 "uuid": "c29948cc-42cf-11ef-96ac-773515fba644", 00:10:32.107 "assigned_rate_limits": { 00:10:32.107 "rw_ios_per_sec": 0, 00:10:32.107 "rw_mbytes_per_sec": 0, 00:10:32.107 "r_mbytes_per_sec": 0, 00:10:32.107 "w_mbytes_per_sec": 0 00:10:32.107 }, 00:10:32.107 "claimed": true, 00:10:32.107 "claim_type": "exclusive_write", 00:10:32.107 "zoned": false, 00:10:32.107 "supported_io_types": { 00:10:32.107 "read": true, 00:10:32.107 "write": true, 00:10:32.107 "unmap": true, 00:10:32.107 "flush": true, 00:10:32.107 "reset": true, 00:10:32.107 "nvme_admin": false, 00:10:32.107 "nvme_io": false, 00:10:32.107 "nvme_io_md": false, 00:10:32.107 "write_zeroes": true, 00:10:32.107 "zcopy": true, 00:10:32.107 "get_zone_info": false, 00:10:32.107 "zone_management": false, 00:10:32.107 "zone_append": false, 00:10:32.107 "compare": false, 00:10:32.107 "compare_and_write": false, 00:10:32.107 "abort": true, 00:10:32.107 "seek_hole": false, 00:10:32.107 "seek_data": false, 00:10:32.107 "copy": true, 00:10:32.107 "nvme_iov_md": false 00:10:32.107 }, 00:10:32.107 "memory_domains": [ 00:10:32.107 { 00:10:32.107 "dma_device_id": "system", 00:10:32.107 "dma_device_type": 1 00:10:32.107 }, 00:10:32.107 { 00:10:32.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.107 "dma_device_type": 2 00:10:32.107 } 00:10:32.107 ], 00:10:32.107 "driver_specific": {} 00:10:32.107 }' 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:10:32.107 17:29:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:32.364 "name": "BaseBdev3", 00:10:32.364 "aliases": [ 00:10:32.364 "c318dab0-42cf-11ef-96ac-773515fba644" 00:10:32.364 ], 00:10:32.364 "product_name": "Malloc disk", 00:10:32.364 "block_size": 512, 00:10:32.364 "num_blocks": 65536, 00:10:32.364 "uuid": "c318dab0-42cf-11ef-96ac-773515fba644", 00:10:32.364 "assigned_rate_limits": { 00:10:32.364 "rw_ios_per_sec": 0, 00:10:32.364 "rw_mbytes_per_sec": 0, 00:10:32.364 "r_mbytes_per_sec": 0, 00:10:32.364 "w_mbytes_per_sec": 0 00:10:32.364 }, 00:10:32.364 "claimed": true, 00:10:32.364 "claim_type": "exclusive_write", 00:10:32.364 "zoned": false, 00:10:32.364 "supported_io_types": { 00:10:32.364 "read": true, 00:10:32.364 "write": true, 00:10:32.364 "unmap": true, 00:10:32.364 "flush": true, 00:10:32.364 "reset": true, 00:10:32.364 "nvme_admin": false, 00:10:32.364 "nvme_io": false, 00:10:32.364 "nvme_io_md": false, 00:10:32.364 "write_zeroes": true, 00:10:32.364 "zcopy": true, 00:10:32.364 "get_zone_info": false, 00:10:32.364 "zone_management": false, 00:10:32.364 "zone_append": false, 00:10:32.364 "compare": false, 00:10:32.364 "compare_and_write": false, 00:10:32.364 "abort": true, 00:10:32.364 "seek_hole": false, 00:10:32.364 "seek_data": false, 00:10:32.364 "copy": true, 00:10:32.364 "nvme_iov_md": false 00:10:32.364 }, 00:10:32.364 "memory_domains": [ 00:10:32.364 { 00:10:32.364 "dma_device_id": "system", 00:10:32.364 "dma_device_type": 1 00:10:32.364 }, 00:10:32.364 { 00:10:32.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.364 "dma_device_type": 2 00:10:32.364 } 00:10:32.364 ], 00:10:32.364 "driver_specific": {} 00:10:32.364 }' 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:32.364 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:32.621 [2024-07-15 17:29:28.436474] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.621 [2024-07-15 17:29:28.436505] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.621 [2024-07-15 17:29:28.436537] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.621 [2024-07-15 17:29:28.436551] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:32.621 [2024-07-15 17:29:28.436556] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x101770a34a00 name Existed_Raid, state offline 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 54020 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 54020 ']' 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 54020 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 54020 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:32.878 killing process with pid 54020 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54020' 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 54020 00:10:32.878 [2024-07-15 17:29:28.465198] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 54020 00:10:32.878 [2024-07-15 17:29:28.482525] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:32.878 00:10:32.878 real 0m23.802s 00:10:32.878 user 0m43.489s 00:10:32.878 sys 0m3.269s 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:32.878 17:29:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:32.878 ************************************ 00:10:32.878 END TEST raid_state_function_test 00:10:32.878 ************************************ 00:10:32.878 17:29:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:32.878 17:29:28 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:32.878 17:29:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:32.878 17:29:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:32.878 17:29:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.136 ************************************ 00:10:33.136 START TEST raid_state_function_test_sb 00:10:33.136 ************************************ 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:33.136 17:29:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=54749 00:10:33.136 Process raid pid: 54749 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54749' 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 54749 /var/tmp/spdk-raid.sock 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 54749 ']' 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:10:33.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:33.136 17:29:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.136 [2024-07-15 17:29:28.727072] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:10:33.136 [2024-07-15 17:29:28.727361] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:33.700 EAL: TSC is not safe to use in SMP mode 00:10:33.700 EAL: TSC is not invariant 00:10:33.700 [2024-07-15 17:29:29.290900] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.700 [2024-07-15 17:29:29.381193] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:33.700 [2024-07-15 17:29:29.383352] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.700 [2024-07-15 17:29:29.384125] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.700 [2024-07-15 17:29:29.384140] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.266 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:34.266 17:29:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:34.266 17:29:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:34.266 [2024-07-15 17:29:30.092362] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.266 [2024-07-15 17:29:30.092419] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.266 [2024-07-15 17:29:30.092425] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.266 [2024-07-15 17:29:30.092434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.266 [2024-07-15 17:29:30.092437] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.266 [2024-07-15 17:29:30.092445] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:34.524 
17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.524 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.782 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:34.782 "name": "Existed_Raid", 00:10:34.782 "uuid": "cadd10a8-42cf-11ef-96ac-773515fba644", 00:10:34.782 "strip_size_kb": 64, 00:10:34.782 "state": "configuring", 00:10:34.782 "raid_level": "concat", 00:10:34.782 "superblock": true, 00:10:34.782 "num_base_bdevs": 3, 00:10:34.782 "num_base_bdevs_discovered": 0, 00:10:34.782 "num_base_bdevs_operational": 3, 00:10:34.782 "base_bdevs_list": [ 00:10:34.782 { 00:10:34.782 "name": "BaseBdev1", 00:10:34.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.782 "is_configured": false, 00:10:34.783 "data_offset": 0, 00:10:34.783 "data_size": 0 00:10:34.783 }, 00:10:34.783 { 00:10:34.783 "name": "BaseBdev2", 00:10:34.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.783 "is_configured": false, 00:10:34.783 "data_offset": 0, 00:10:34.783 "data_size": 0 00:10:34.783 }, 00:10:34.783 { 00:10:34.783 "name": "BaseBdev3", 00:10:34.783 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.783 "is_configured": false, 00:10:34.783 "data_offset": 0, 00:10:34.783 "data_size": 0 00:10:34.783 } 00:10:34.783 ] 00:10:34.783 }' 00:10:34.783 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:34.783 17:29:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.040 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:35.296 [2024-07-15 17:29:30.980353] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.296 [2024-07-15 17:29:30.980381] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25c078e34500 name Existed_Raid, state configuring 00:10:35.296 17:29:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:35.554 [2024-07-15 17:29:31.260369] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.554 [2024-07-15 17:29:31.260429] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.554 [2024-07-15 17:29:31.260434] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.554 [2024-07-15 17:29:31.260458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.554 [2024-07-15 17:29:31.260461] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.554 
[2024-07-15 17:29:31.260468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.554 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:35.812 [2024-07-15 17:29:31.501442] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.813 BaseBdev1 00:10:35.813 17:29:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:35.813 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:35.813 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:35.813 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:35.813 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:35.813 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:35.813 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:36.071 17:29:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.329 [ 00:10:36.329 { 00:10:36.329 "name": "BaseBdev1", 00:10:36.329 "aliases": [ 00:10:36.329 "cbb3e92a-42cf-11ef-96ac-773515fba644" 00:10:36.329 ], 00:10:36.329 "product_name": "Malloc disk", 00:10:36.329 "block_size": 512, 00:10:36.329 "num_blocks": 65536, 00:10:36.329 "uuid": "cbb3e92a-42cf-11ef-96ac-773515fba644", 00:10:36.329 "assigned_rate_limits": { 00:10:36.329 "rw_ios_per_sec": 0, 00:10:36.329 "rw_mbytes_per_sec": 0, 00:10:36.329 "r_mbytes_per_sec": 0, 00:10:36.329 "w_mbytes_per_sec": 0 00:10:36.329 }, 00:10:36.329 "claimed": true, 00:10:36.329 "claim_type": "exclusive_write", 00:10:36.329 "zoned": false, 00:10:36.329 "supported_io_types": { 00:10:36.329 "read": true, 00:10:36.329 "write": true, 00:10:36.329 "unmap": true, 00:10:36.329 "flush": true, 00:10:36.329 "reset": true, 00:10:36.329 "nvme_admin": false, 00:10:36.329 "nvme_io": false, 00:10:36.329 "nvme_io_md": false, 00:10:36.329 "write_zeroes": true, 00:10:36.329 "zcopy": true, 00:10:36.329 "get_zone_info": false, 00:10:36.329 "zone_management": false, 00:10:36.329 "zone_append": false, 00:10:36.329 "compare": false, 00:10:36.329 "compare_and_write": false, 00:10:36.329 "abort": true, 00:10:36.329 "seek_hole": false, 00:10:36.329 "seek_data": false, 00:10:36.329 "copy": true, 00:10:36.329 "nvme_iov_md": false 00:10:36.330 }, 00:10:36.330 "memory_domains": [ 00:10:36.330 { 00:10:36.330 "dma_device_id": "system", 00:10:36.330 "dma_device_type": 1 00:10:36.330 }, 00:10:36.330 { 00:10:36.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.330 "dma_device_type": 2 00:10:36.330 } 00:10:36.330 ], 00:10:36.330 "driver_specific": {} 00:10:36.330 } 00:10:36.330 ] 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.330 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.588 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:36.588 "name": "Existed_Raid", 00:10:36.588 "uuid": "cb8f49ef-42cf-11ef-96ac-773515fba644", 00:10:36.588 "strip_size_kb": 64, 00:10:36.588 "state": "configuring", 00:10:36.588 "raid_level": "concat", 00:10:36.588 "superblock": true, 00:10:36.588 "num_base_bdevs": 3, 00:10:36.588 "num_base_bdevs_discovered": 1, 00:10:36.588 "num_base_bdevs_operational": 3, 00:10:36.588 "base_bdevs_list": [ 00:10:36.588 { 00:10:36.588 "name": "BaseBdev1", 00:10:36.588 "uuid": "cbb3e92a-42cf-11ef-96ac-773515fba644", 00:10:36.588 "is_configured": true, 00:10:36.588 "data_offset": 2048, 00:10:36.588 "data_size": 63488 00:10:36.588 }, 00:10:36.588 { 00:10:36.588 "name": "BaseBdev2", 00:10:36.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.588 "is_configured": false, 00:10:36.588 "data_offset": 0, 00:10:36.588 "data_size": 0 00:10:36.588 }, 00:10:36.588 { 00:10:36.588 "name": "BaseBdev3", 00:10:36.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.588 "is_configured": false, 00:10:36.588 "data_offset": 0, 00:10:36.588 "data_size": 0 00:10:36.588 } 00:10:36.588 ] 00:10:36.588 }' 00:10:36.588 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:36.588 17:29:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.155 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:37.155 [2024-07-15 17:29:32.940407] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:37.155 [2024-07-15 17:29:32.940440] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25c078e34500 name Existed_Raid, state configuring 00:10:37.155 17:29:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:37.414 [2024-07-15 17:29:33.172431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.414 [2024-07-15 
17:29:33.173304] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:37.414 [2024-07-15 17:29:33.173372] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:37.414 [2024-07-15 17:29:33.173377] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:37.414 [2024-07-15 17:29:33.173402] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.414 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.672 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.672 "name": "Existed_Raid", 00:10:37.672 "uuid": "ccb30bba-42cf-11ef-96ac-773515fba644", 00:10:37.672 "strip_size_kb": 64, 00:10:37.672 "state": "configuring", 00:10:37.672 "raid_level": "concat", 00:10:37.672 "superblock": true, 00:10:37.672 "num_base_bdevs": 3, 00:10:37.672 "num_base_bdevs_discovered": 1, 00:10:37.672 "num_base_bdevs_operational": 3, 00:10:37.672 "base_bdevs_list": [ 00:10:37.672 { 00:10:37.672 "name": "BaseBdev1", 00:10:37.672 "uuid": "cbb3e92a-42cf-11ef-96ac-773515fba644", 00:10:37.672 "is_configured": true, 00:10:37.672 "data_offset": 2048, 00:10:37.672 "data_size": 63488 00:10:37.672 }, 00:10:37.672 { 00:10:37.672 "name": "BaseBdev2", 00:10:37.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.672 "is_configured": false, 00:10:37.672 "data_offset": 0, 00:10:37.672 "data_size": 0 00:10:37.672 }, 00:10:37.672 { 00:10:37.672 "name": "BaseBdev3", 00:10:37.672 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.672 "is_configured": false, 00:10:37.672 "data_offset": 0, 00:10:37.672 "data_size": 0 00:10:37.672 } 00:10:37.672 ] 00:10:37.672 }' 00:10:37.672 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.672 
17:29:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.304 17:29:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.304 [2024-07-15 17:29:34.080613] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.304 BaseBdev2 00:10:38.304 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:38.304 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:38.304 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:38.304 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:38.304 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:38.304 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:38.304 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:38.564 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.823 [ 00:10:38.823 { 00:10:38.823 "name": "BaseBdev2", 00:10:38.823 "aliases": [ 00:10:38.823 "cd3d99b2-42cf-11ef-96ac-773515fba644" 00:10:38.823 ], 00:10:38.823 "product_name": "Malloc disk", 00:10:38.823 "block_size": 512, 00:10:38.823 "num_blocks": 65536, 00:10:38.823 "uuid": "cd3d99b2-42cf-11ef-96ac-773515fba644", 00:10:38.823 "assigned_rate_limits": { 00:10:38.823 "rw_ios_per_sec": 0, 00:10:38.823 "rw_mbytes_per_sec": 0, 00:10:38.823 "r_mbytes_per_sec": 0, 00:10:38.823 "w_mbytes_per_sec": 0 00:10:38.823 }, 00:10:38.823 "claimed": true, 00:10:38.823 "claim_type": "exclusive_write", 00:10:38.823 "zoned": false, 00:10:38.823 "supported_io_types": { 00:10:38.823 "read": true, 00:10:38.823 "write": true, 00:10:38.823 "unmap": true, 00:10:38.824 "flush": true, 00:10:38.824 "reset": true, 00:10:38.824 "nvme_admin": false, 00:10:38.824 "nvme_io": false, 00:10:38.824 "nvme_io_md": false, 00:10:38.824 "write_zeroes": true, 00:10:38.824 "zcopy": true, 00:10:38.824 "get_zone_info": false, 00:10:38.824 "zone_management": false, 00:10:38.824 "zone_append": false, 00:10:38.824 "compare": false, 00:10:38.824 "compare_and_write": false, 00:10:38.824 "abort": true, 00:10:38.824 "seek_hole": false, 00:10:38.824 "seek_data": false, 00:10:38.824 "copy": true, 00:10:38.824 "nvme_iov_md": false 00:10:38.824 }, 00:10:38.824 "memory_domains": [ 00:10:38.824 { 00:10:38.824 "dma_device_id": "system", 00:10:38.824 "dma_device_type": 1 00:10:38.824 }, 00:10:38.824 { 00:10:38.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.824 "dma_device_type": 2 00:10:38.824 } 00:10:38.824 ], 00:10:38.824 "driver_specific": {} 00:10:38.824 } 00:10:38.824 ] 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:38.824 17:29:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.824 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.082 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.082 "name": "Existed_Raid", 00:10:39.082 "uuid": "ccb30bba-42cf-11ef-96ac-773515fba644", 00:10:39.082 "strip_size_kb": 64, 00:10:39.082 "state": "configuring", 00:10:39.082 "raid_level": "concat", 00:10:39.082 "superblock": true, 00:10:39.082 "num_base_bdevs": 3, 00:10:39.082 "num_base_bdevs_discovered": 2, 00:10:39.082 "num_base_bdevs_operational": 3, 00:10:39.082 "base_bdevs_list": [ 00:10:39.082 { 00:10:39.082 "name": "BaseBdev1", 00:10:39.082 "uuid": "cbb3e92a-42cf-11ef-96ac-773515fba644", 00:10:39.082 "is_configured": true, 00:10:39.082 "data_offset": 2048, 00:10:39.082 "data_size": 63488 00:10:39.082 }, 00:10:39.082 { 00:10:39.082 "name": "BaseBdev2", 00:10:39.082 "uuid": "cd3d99b2-42cf-11ef-96ac-773515fba644", 00:10:39.082 "is_configured": true, 00:10:39.082 "data_offset": 2048, 00:10:39.082 "data_size": 63488 00:10:39.082 }, 00:10:39.082 { 00:10:39.082 "name": "BaseBdev3", 00:10:39.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.082 "is_configured": false, 00:10:39.082 "data_offset": 0, 00:10:39.082 "data_size": 0 00:10:39.082 } 00:10:39.082 ] 00:10:39.082 }' 00:10:39.082 17:29:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.082 17:29:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.354 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:39.617 [2024-07-15 17:29:35.380635] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:39.617 [2024-07-15 17:29:35.380694] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25c078e34a00 00:10:39.617 [2024-07-15 17:29:35.380700] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:39.617 [2024-07-15 17:29:35.380721] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25c078e97e20 00:10:39.617 [2024-07-15 17:29:35.380772] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25c078e34a00 00:10:39.617 [2024-07-15 17:29:35.380776] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25c078e34a00 00:10:39.617 [2024-07-15 17:29:35.380801] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.617 BaseBdev3 00:10:39.617 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:39.617 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:39.617 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:39.617 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:39.617 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:39.617 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:39.617 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:39.874 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:40.133 [ 00:10:40.133 { 00:10:40.133 "name": "BaseBdev3", 00:10:40.133 "aliases": [ 00:10:40.133 "ce03f88d-42cf-11ef-96ac-773515fba644" 00:10:40.133 ], 00:10:40.133 "product_name": "Malloc disk", 00:10:40.133 "block_size": 512, 00:10:40.133 "num_blocks": 65536, 00:10:40.133 "uuid": "ce03f88d-42cf-11ef-96ac-773515fba644", 00:10:40.133 "assigned_rate_limits": { 00:10:40.133 "rw_ios_per_sec": 0, 00:10:40.133 "rw_mbytes_per_sec": 0, 00:10:40.133 "r_mbytes_per_sec": 0, 00:10:40.133 "w_mbytes_per_sec": 0 00:10:40.133 }, 00:10:40.133 "claimed": true, 00:10:40.133 "claim_type": "exclusive_write", 00:10:40.133 "zoned": false, 00:10:40.133 "supported_io_types": { 00:10:40.133 "read": true, 00:10:40.133 "write": true, 00:10:40.133 "unmap": true, 00:10:40.133 "flush": true, 00:10:40.133 "reset": true, 00:10:40.133 "nvme_admin": false, 00:10:40.133 "nvme_io": false, 00:10:40.133 "nvme_io_md": false, 00:10:40.133 "write_zeroes": true, 00:10:40.133 "zcopy": true, 00:10:40.133 "get_zone_info": false, 00:10:40.133 "zone_management": false, 00:10:40.133 "zone_append": false, 00:10:40.133 "compare": false, 00:10:40.133 "compare_and_write": false, 00:10:40.133 "abort": true, 00:10:40.133 "seek_hole": false, 00:10:40.133 "seek_data": false, 00:10:40.133 "copy": true, 00:10:40.133 "nvme_iov_md": false 00:10:40.133 }, 00:10:40.133 "memory_domains": [ 00:10:40.133 { 00:10:40.133 "dma_device_id": "system", 00:10:40.133 "dma_device_type": 1 00:10:40.133 }, 00:10:40.133 { 00:10:40.133 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.133 "dma_device_type": 2 00:10:40.133 } 00:10:40.133 ], 00:10:40.133 "driver_specific": {} 00:10:40.133 } 00:10:40.133 ] 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.133 17:29:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.392 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:40.392 "name": "Existed_Raid", 00:10:40.392 "uuid": "ccb30bba-42cf-11ef-96ac-773515fba644", 00:10:40.392 "strip_size_kb": 64, 00:10:40.392 "state": "online", 00:10:40.392 "raid_level": "concat", 00:10:40.392 "superblock": true, 00:10:40.392 "num_base_bdevs": 3, 00:10:40.392 "num_base_bdevs_discovered": 3, 00:10:40.392 "num_base_bdevs_operational": 3, 00:10:40.392 "base_bdevs_list": [ 00:10:40.392 { 00:10:40.392 "name": "BaseBdev1", 00:10:40.392 "uuid": "cbb3e92a-42cf-11ef-96ac-773515fba644", 00:10:40.392 "is_configured": true, 00:10:40.392 "data_offset": 2048, 00:10:40.392 "data_size": 63488 00:10:40.392 }, 00:10:40.392 { 00:10:40.392 "name": "BaseBdev2", 00:10:40.392 "uuid": "cd3d99b2-42cf-11ef-96ac-773515fba644", 00:10:40.392 "is_configured": true, 00:10:40.392 "data_offset": 2048, 00:10:40.392 "data_size": 63488 00:10:40.392 }, 00:10:40.392 { 00:10:40.392 "name": "BaseBdev3", 00:10:40.392 "uuid": "ce03f88d-42cf-11ef-96ac-773515fba644", 00:10:40.392 "is_configured": true, 00:10:40.392 "data_offset": 2048, 00:10:40.392 "data_size": 63488 00:10:40.392 } 00:10:40.392 ] 00:10:40.392 }' 00:10:40.392 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:40.392 17:29:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.650 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.650 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:40.650 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:40.650 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:40.650 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:40.650 17:29:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:40.650 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:40.650 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:40.908 [2024-07-15 17:29:36.732575] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:41.167 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:41.167 "name": "Existed_Raid", 00:10:41.167 "aliases": [ 00:10:41.167 "ccb30bba-42cf-11ef-96ac-773515fba644" 00:10:41.167 ], 00:10:41.167 "product_name": "Raid Volume", 00:10:41.167 "block_size": 512, 00:10:41.167 "num_blocks": 190464, 00:10:41.167 "uuid": "ccb30bba-42cf-11ef-96ac-773515fba644", 00:10:41.167 "assigned_rate_limits": { 00:10:41.167 "rw_ios_per_sec": 0, 00:10:41.167 "rw_mbytes_per_sec": 0, 00:10:41.167 "r_mbytes_per_sec": 0, 00:10:41.167 "w_mbytes_per_sec": 0 00:10:41.167 }, 00:10:41.167 "claimed": false, 00:10:41.167 "zoned": false, 00:10:41.167 "supported_io_types": { 00:10:41.167 "read": true, 00:10:41.167 "write": true, 00:10:41.167 "unmap": true, 00:10:41.167 "flush": true, 00:10:41.167 "reset": true, 00:10:41.167 "nvme_admin": false, 00:10:41.167 "nvme_io": false, 00:10:41.167 "nvme_io_md": false, 00:10:41.167 "write_zeroes": true, 00:10:41.167 "zcopy": false, 00:10:41.167 "get_zone_info": false, 00:10:41.167 "zone_management": false, 00:10:41.167 "zone_append": false, 00:10:41.167 "compare": false, 00:10:41.167 "compare_and_write": false, 00:10:41.167 "abort": false, 00:10:41.167 "seek_hole": false, 00:10:41.167 "seek_data": false, 00:10:41.167 "copy": false, 00:10:41.167 "nvme_iov_md": false 00:10:41.167 }, 00:10:41.167 "memory_domains": [ 00:10:41.167 { 00:10:41.167 "dma_device_id": "system", 00:10:41.167 "dma_device_type": 1 00:10:41.167 }, 00:10:41.167 { 00:10:41.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.167 "dma_device_type": 2 00:10:41.167 }, 00:10:41.167 { 00:10:41.167 "dma_device_id": "system", 00:10:41.167 "dma_device_type": 1 00:10:41.167 }, 00:10:41.167 { 00:10:41.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.167 "dma_device_type": 2 00:10:41.167 }, 00:10:41.167 { 00:10:41.167 "dma_device_id": "system", 00:10:41.167 "dma_device_type": 1 00:10:41.167 }, 00:10:41.167 { 00:10:41.167 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.167 "dma_device_type": 2 00:10:41.167 } 00:10:41.167 ], 00:10:41.167 "driver_specific": { 00:10:41.167 "raid": { 00:10:41.167 "uuid": "ccb30bba-42cf-11ef-96ac-773515fba644", 00:10:41.167 "strip_size_kb": 64, 00:10:41.167 "state": "online", 00:10:41.167 "raid_level": "concat", 00:10:41.167 "superblock": true, 00:10:41.167 "num_base_bdevs": 3, 00:10:41.167 "num_base_bdevs_discovered": 3, 00:10:41.167 "num_base_bdevs_operational": 3, 00:10:41.167 "base_bdevs_list": [ 00:10:41.167 { 00:10:41.167 "name": "BaseBdev1", 00:10:41.167 "uuid": "cbb3e92a-42cf-11ef-96ac-773515fba644", 00:10:41.167 "is_configured": true, 00:10:41.167 "data_offset": 2048, 00:10:41.167 "data_size": 63488 00:10:41.167 }, 00:10:41.167 { 00:10:41.167 "name": "BaseBdev2", 00:10:41.168 "uuid": "cd3d99b2-42cf-11ef-96ac-773515fba644", 00:10:41.168 "is_configured": true, 00:10:41.168 "data_offset": 2048, 00:10:41.168 "data_size": 63488 00:10:41.168 }, 00:10:41.168 { 00:10:41.168 "name": "BaseBdev3", 00:10:41.168 "uuid": 
"ce03f88d-42cf-11ef-96ac-773515fba644", 00:10:41.168 "is_configured": true, 00:10:41.168 "data_offset": 2048, 00:10:41.168 "data_size": 63488 00:10:41.168 } 00:10:41.168 ] 00:10:41.168 } 00:10:41.168 } 00:10:41.168 }' 00:10:41.168 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:41.168 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:41.168 BaseBdev2 00:10:41.168 BaseBdev3' 00:10:41.168 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:41.168 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:41.168 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:41.168 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:41.168 "name": "BaseBdev1", 00:10:41.168 "aliases": [ 00:10:41.168 "cbb3e92a-42cf-11ef-96ac-773515fba644" 00:10:41.168 ], 00:10:41.168 "product_name": "Malloc disk", 00:10:41.168 "block_size": 512, 00:10:41.168 "num_blocks": 65536, 00:10:41.168 "uuid": "cbb3e92a-42cf-11ef-96ac-773515fba644", 00:10:41.168 "assigned_rate_limits": { 00:10:41.168 "rw_ios_per_sec": 0, 00:10:41.168 "rw_mbytes_per_sec": 0, 00:10:41.168 "r_mbytes_per_sec": 0, 00:10:41.168 "w_mbytes_per_sec": 0 00:10:41.168 }, 00:10:41.168 "claimed": true, 00:10:41.168 "claim_type": "exclusive_write", 00:10:41.168 "zoned": false, 00:10:41.168 "supported_io_types": { 00:10:41.168 "read": true, 00:10:41.168 "write": true, 00:10:41.168 "unmap": true, 00:10:41.168 "flush": true, 00:10:41.168 "reset": true, 00:10:41.168 "nvme_admin": false, 00:10:41.168 "nvme_io": false, 00:10:41.168 "nvme_io_md": false, 00:10:41.168 "write_zeroes": true, 00:10:41.168 "zcopy": true, 00:10:41.168 "get_zone_info": false, 00:10:41.168 "zone_management": false, 00:10:41.168 "zone_append": false, 00:10:41.168 "compare": false, 00:10:41.168 "compare_and_write": false, 00:10:41.168 "abort": true, 00:10:41.168 "seek_hole": false, 00:10:41.168 "seek_data": false, 00:10:41.168 "copy": true, 00:10:41.168 "nvme_iov_md": false 00:10:41.168 }, 00:10:41.168 "memory_domains": [ 00:10:41.168 { 00:10:41.168 "dma_device_id": "system", 00:10:41.168 "dma_device_type": 1 00:10:41.168 }, 00:10:41.168 { 00:10:41.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.168 "dma_device_type": 2 00:10:41.168 } 00:10:41.168 ], 00:10:41.168 "driver_specific": {} 00:10:41.168 }' 00:10:41.427 17:29:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:41.427 
17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:41.427 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:41.685 "name": "BaseBdev2", 00:10:41.685 "aliases": [ 00:10:41.685 "cd3d99b2-42cf-11ef-96ac-773515fba644" 00:10:41.685 ], 00:10:41.685 "product_name": "Malloc disk", 00:10:41.685 "block_size": 512, 00:10:41.685 "num_blocks": 65536, 00:10:41.685 "uuid": "cd3d99b2-42cf-11ef-96ac-773515fba644", 00:10:41.685 "assigned_rate_limits": { 00:10:41.685 "rw_ios_per_sec": 0, 00:10:41.685 "rw_mbytes_per_sec": 0, 00:10:41.685 "r_mbytes_per_sec": 0, 00:10:41.685 "w_mbytes_per_sec": 0 00:10:41.685 }, 00:10:41.685 "claimed": true, 00:10:41.685 "claim_type": "exclusive_write", 00:10:41.685 "zoned": false, 00:10:41.685 "supported_io_types": { 00:10:41.685 "read": true, 00:10:41.685 "write": true, 00:10:41.685 "unmap": true, 00:10:41.685 "flush": true, 00:10:41.685 "reset": true, 00:10:41.685 "nvme_admin": false, 00:10:41.685 "nvme_io": false, 00:10:41.685 "nvme_io_md": false, 00:10:41.685 "write_zeroes": true, 00:10:41.685 "zcopy": true, 00:10:41.685 "get_zone_info": false, 00:10:41.685 "zone_management": false, 00:10:41.685 "zone_append": false, 00:10:41.685 "compare": false, 00:10:41.685 "compare_and_write": false, 00:10:41.685 "abort": true, 00:10:41.685 "seek_hole": false, 00:10:41.685 "seek_data": false, 00:10:41.685 "copy": true, 00:10:41.685 "nvme_iov_md": false 00:10:41.685 }, 00:10:41.685 "memory_domains": [ 00:10:41.685 { 00:10:41.685 "dma_device_id": "system", 00:10:41.685 "dma_device_type": 1 00:10:41.685 }, 00:10:41.685 { 00:10:41.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.685 "dma_device_type": 2 00:10:41.685 } 00:10:41.685 ], 00:10:41.685 "driver_specific": {} 00:10:41.685 }' 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:41.685 17:29:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:41.685 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:41.944 "name": "BaseBdev3", 00:10:41.944 "aliases": [ 00:10:41.944 "ce03f88d-42cf-11ef-96ac-773515fba644" 00:10:41.944 ], 00:10:41.944 "product_name": "Malloc disk", 00:10:41.944 "block_size": 512, 00:10:41.944 "num_blocks": 65536, 00:10:41.944 "uuid": "ce03f88d-42cf-11ef-96ac-773515fba644", 00:10:41.944 "assigned_rate_limits": { 00:10:41.944 "rw_ios_per_sec": 0, 00:10:41.944 "rw_mbytes_per_sec": 0, 00:10:41.944 "r_mbytes_per_sec": 0, 00:10:41.944 "w_mbytes_per_sec": 0 00:10:41.944 }, 00:10:41.944 "claimed": true, 00:10:41.944 "claim_type": "exclusive_write", 00:10:41.944 "zoned": false, 00:10:41.944 "supported_io_types": { 00:10:41.944 "read": true, 00:10:41.944 "write": true, 00:10:41.944 "unmap": true, 00:10:41.944 "flush": true, 00:10:41.944 "reset": true, 00:10:41.944 "nvme_admin": false, 00:10:41.944 "nvme_io": false, 00:10:41.944 "nvme_io_md": false, 00:10:41.944 "write_zeroes": true, 00:10:41.944 "zcopy": true, 00:10:41.944 "get_zone_info": false, 00:10:41.944 "zone_management": false, 00:10:41.944 "zone_append": false, 00:10:41.944 "compare": false, 00:10:41.944 "compare_and_write": false, 00:10:41.944 "abort": true, 00:10:41.944 "seek_hole": false, 00:10:41.944 "seek_data": false, 00:10:41.944 "copy": true, 00:10:41.944 "nvme_iov_md": false 00:10:41.944 }, 00:10:41.944 "memory_domains": [ 00:10:41.944 { 00:10:41.944 "dma_device_id": "system", 00:10:41.944 "dma_device_type": 1 00:10:41.944 }, 00:10:41.944 { 00:10:41.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.944 "dma_device_type": 2 00:10:41.944 } 00:10:41.944 ], 00:10:41.944 "driver_specific": {} 00:10:41.944 }' 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:41.944 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:42.204 [2024-07-15 17:29:37.960612] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.204 [2024-07-15 17:29:37.960635] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.204 [2024-07-15 17:29:37.960648] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.204 17:29:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.463 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.463 "name": "Existed_Raid", 00:10:42.463 "uuid": "ccb30bba-42cf-11ef-96ac-773515fba644", 00:10:42.463 "strip_size_kb": 64, 00:10:42.463 "state": "offline", 00:10:42.463 "raid_level": "concat", 00:10:42.463 "superblock": true, 00:10:42.463 "num_base_bdevs": 3, 00:10:42.463 "num_base_bdevs_discovered": 2, 00:10:42.463 "num_base_bdevs_operational": 2, 00:10:42.463 "base_bdevs_list": [ 00:10:42.463 { 00:10:42.463 "name": null, 00:10:42.463 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.463 "is_configured": false, 00:10:42.463 "data_offset": 2048, 00:10:42.463 "data_size": 63488 00:10:42.463 }, 00:10:42.463 { 00:10:42.463 "name": "BaseBdev2", 00:10:42.463 "uuid": 
"cd3d99b2-42cf-11ef-96ac-773515fba644", 00:10:42.463 "is_configured": true, 00:10:42.463 "data_offset": 2048, 00:10:42.463 "data_size": 63488 00:10:42.463 }, 00:10:42.463 { 00:10:42.463 "name": "BaseBdev3", 00:10:42.463 "uuid": "ce03f88d-42cf-11ef-96ac-773515fba644", 00:10:42.463 "is_configured": true, 00:10:42.463 "data_offset": 2048, 00:10:42.463 "data_size": 63488 00:10:42.463 } 00:10:42.463 ] 00:10:42.463 }' 00:10:42.463 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.463 17:29:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.722 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:42.722 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:42.722 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.722 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:42.980 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:42.980 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:42.980 17:29:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:43.239 [2024-07-15 17:29:39.002685] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.239 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:43.239 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:43.239 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.239 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:43.497 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:43.497 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.497 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:43.755 [2024-07-15 17:29:39.517175] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.755 [2024-07-15 17:29:39.517222] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25c078e34a00 name Existed_Raid, state offline 00:10:43.755 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:43.755 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:43.755 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.755 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.322 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:44.322 17:29:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:44.322 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:44.322 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:44.322 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:44.322 17:29:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:44.322 BaseBdev2 00:10:44.594 17:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:44.594 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:44.594 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:44.594 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:44.594 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:44.594 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:44.595 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:44.595 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:45.161 [ 00:10:45.161 { 00:10:45.161 "name": "BaseBdev2", 00:10:45.161 "aliases": [ 00:10:45.161 "d0db6561-42cf-11ef-96ac-773515fba644" 00:10:45.161 ], 00:10:45.161 "product_name": "Malloc disk", 00:10:45.161 "block_size": 512, 00:10:45.161 "num_blocks": 65536, 00:10:45.161 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:45.161 "assigned_rate_limits": { 00:10:45.161 "rw_ios_per_sec": 0, 00:10:45.161 "rw_mbytes_per_sec": 0, 00:10:45.161 "r_mbytes_per_sec": 0, 00:10:45.161 "w_mbytes_per_sec": 0 00:10:45.161 }, 00:10:45.161 "claimed": false, 00:10:45.161 "zoned": false, 00:10:45.161 "supported_io_types": { 00:10:45.161 "read": true, 00:10:45.161 "write": true, 00:10:45.161 "unmap": true, 00:10:45.161 "flush": true, 00:10:45.161 "reset": true, 00:10:45.161 "nvme_admin": false, 00:10:45.161 "nvme_io": false, 00:10:45.161 "nvme_io_md": false, 00:10:45.161 "write_zeroes": true, 00:10:45.161 "zcopy": true, 00:10:45.161 "get_zone_info": false, 00:10:45.161 "zone_management": false, 00:10:45.161 "zone_append": false, 00:10:45.161 "compare": false, 00:10:45.161 "compare_and_write": false, 00:10:45.161 "abort": true, 00:10:45.161 "seek_hole": false, 00:10:45.161 "seek_data": false, 00:10:45.161 "copy": true, 00:10:45.161 "nvme_iov_md": false 00:10:45.161 }, 00:10:45.161 "memory_domains": [ 00:10:45.161 { 00:10:45.161 "dma_device_id": "system", 00:10:45.161 "dma_device_type": 1 00:10:45.161 }, 00:10:45.161 { 00:10:45.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.161 "dma_device_type": 2 00:10:45.161 } 00:10:45.161 ], 00:10:45.161 "driver_specific": {} 00:10:45.161 } 00:10:45.161 ] 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:45.161 17:29:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:45.161 BaseBdev3 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:45.161 17:29:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:45.419 17:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:45.679 [ 00:10:45.679 { 00:10:45.679 "name": "BaseBdev3", 00:10:45.679 "aliases": [ 00:10:45.679 "d15306f9-42cf-11ef-96ac-773515fba644" 00:10:45.679 ], 00:10:45.679 "product_name": "Malloc disk", 00:10:45.679 "block_size": 512, 00:10:45.679 "num_blocks": 65536, 00:10:45.679 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:45.679 "assigned_rate_limits": { 00:10:45.679 "rw_ios_per_sec": 0, 00:10:45.679 "rw_mbytes_per_sec": 0, 00:10:45.679 "r_mbytes_per_sec": 0, 00:10:45.679 "w_mbytes_per_sec": 0 00:10:45.679 }, 00:10:45.679 "claimed": false, 00:10:45.679 "zoned": false, 00:10:45.679 "supported_io_types": { 00:10:45.679 "read": true, 00:10:45.679 "write": true, 00:10:45.679 "unmap": true, 00:10:45.679 "flush": true, 00:10:45.679 "reset": true, 00:10:45.679 "nvme_admin": false, 00:10:45.679 "nvme_io": false, 00:10:45.679 "nvme_io_md": false, 00:10:45.679 "write_zeroes": true, 00:10:45.679 "zcopy": true, 00:10:45.679 "get_zone_info": false, 00:10:45.679 "zone_management": false, 00:10:45.679 "zone_append": false, 00:10:45.679 "compare": false, 00:10:45.679 "compare_and_write": false, 00:10:45.679 "abort": true, 00:10:45.679 "seek_hole": false, 00:10:45.679 "seek_data": false, 00:10:45.679 "copy": true, 00:10:45.679 "nvme_iov_md": false 00:10:45.679 }, 00:10:45.679 "memory_domains": [ 00:10:45.679 { 00:10:45.679 "dma_device_id": "system", 00:10:45.679 "dma_device_type": 1 00:10:45.679 }, 00:10:45.679 { 00:10:45.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.679 "dma_device_type": 2 00:10:45.679 } 00:10:45.679 ], 00:10:45.679 "driver_specific": {} 00:10:45.679 } 00:10:45.679 ] 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:45.939 [2024-07-15 17:29:41.731805] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.939 [2024-07-15 17:29:41.731854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:45.939 [2024-07-15 17:29:41.731864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:45.939 [2024-07-15 17:29:41.732582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.939 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.199 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.199 "name": "Existed_Raid", 00:10:46.199 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:46.199 "strip_size_kb": 64, 00:10:46.199 "state": "configuring", 00:10:46.199 "raid_level": "concat", 00:10:46.199 "superblock": true, 00:10:46.199 "num_base_bdevs": 3, 00:10:46.199 "num_base_bdevs_discovered": 2, 00:10:46.199 "num_base_bdevs_operational": 3, 00:10:46.199 "base_bdevs_list": [ 00:10:46.199 { 00:10:46.199 "name": "BaseBdev1", 00:10:46.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.199 "is_configured": false, 00:10:46.199 "data_offset": 0, 00:10:46.199 "data_size": 0 00:10:46.199 }, 00:10:46.199 { 00:10:46.199 "name": "BaseBdev2", 00:10:46.199 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:46.199 "is_configured": true, 00:10:46.199 "data_offset": 2048, 00:10:46.199 "data_size": 63488 00:10:46.199 }, 00:10:46.199 { 00:10:46.199 "name": "BaseBdev3", 00:10:46.199 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:46.199 "is_configured": true, 00:10:46.199 "data_offset": 2048, 00:10:46.199 "data_size": 63488 00:10:46.199 } 00:10:46.199 ] 00:10:46.199 }' 00:10:46.199 17:29:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.199 17:29:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.457 17:29:42 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:46.715 [2024-07-15 17:29:42.483820] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.715 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.973 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:46.973 "name": "Existed_Raid", 00:10:46.973 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:46.973 "strip_size_kb": 64, 00:10:46.973 "state": "configuring", 00:10:46.973 "raid_level": "concat", 00:10:46.973 "superblock": true, 00:10:46.973 "num_base_bdevs": 3, 00:10:46.973 "num_base_bdevs_discovered": 1, 00:10:46.973 "num_base_bdevs_operational": 3, 00:10:46.973 "base_bdevs_list": [ 00:10:46.973 { 00:10:46.973 "name": "BaseBdev1", 00:10:46.973 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.973 "is_configured": false, 00:10:46.973 "data_offset": 0, 00:10:46.973 "data_size": 0 00:10:46.973 }, 00:10:46.973 { 00:10:46.973 "name": null, 00:10:46.973 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:46.973 "is_configured": false, 00:10:46.973 "data_offset": 2048, 00:10:46.973 "data_size": 63488 00:10:46.973 }, 00:10:46.973 { 00:10:46.973 "name": "BaseBdev3", 00:10:46.973 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:46.973 "is_configured": true, 00:10:46.973 "data_offset": 2048, 00:10:46.973 "data_size": 63488 00:10:46.973 } 00:10:46.973 ] 00:10:46.973 }' 00:10:46.974 17:29:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:46.974 17:29:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.231 17:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.231 17:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:47.796 [2024-07-15 17:29:43.603986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.796 BaseBdev1 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:47.796 17:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:48.054 17:29:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.312 [ 00:10:48.312 { 00:10:48.312 "name": "BaseBdev1", 00:10:48.312 "aliases": [ 00:10:48.312 "d2eac189-42cf-11ef-96ac-773515fba644" 00:10:48.312 ], 00:10:48.312 "product_name": "Malloc disk", 00:10:48.312 "block_size": 512, 00:10:48.312 "num_blocks": 65536, 00:10:48.312 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:48.312 "assigned_rate_limits": { 00:10:48.312 "rw_ios_per_sec": 0, 00:10:48.312 "rw_mbytes_per_sec": 0, 00:10:48.312 "r_mbytes_per_sec": 0, 00:10:48.312 "w_mbytes_per_sec": 0 00:10:48.312 }, 00:10:48.312 "claimed": true, 00:10:48.312 "claim_type": "exclusive_write", 00:10:48.312 "zoned": false, 00:10:48.312 "supported_io_types": { 00:10:48.312 "read": true, 00:10:48.312 "write": true, 00:10:48.312 "unmap": true, 00:10:48.312 "flush": true, 00:10:48.312 "reset": true, 00:10:48.312 "nvme_admin": false, 00:10:48.312 "nvme_io": false, 00:10:48.312 "nvme_io_md": false, 00:10:48.312 "write_zeroes": true, 00:10:48.312 "zcopy": true, 00:10:48.312 "get_zone_info": false, 00:10:48.312 "zone_management": false, 00:10:48.312 "zone_append": false, 00:10:48.312 "compare": false, 00:10:48.312 "compare_and_write": false, 00:10:48.312 "abort": true, 00:10:48.312 "seek_hole": false, 00:10:48.312 "seek_data": false, 00:10:48.312 "copy": true, 00:10:48.312 "nvme_iov_md": false 00:10:48.312 }, 00:10:48.312 "memory_domains": [ 00:10:48.312 { 00:10:48.312 "dma_device_id": "system", 00:10:48.312 "dma_device_type": 1 00:10:48.312 }, 00:10:48.312 { 00:10:48.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.312 "dma_device_type": 2 00:10:48.312 } 00:10:48.312 ], 00:10:48.312 "driver_specific": {} 00:10:48.312 } 00:10:48.312 ] 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:48.312 17:29:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.312 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.570 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:48.570 "name": "Existed_Raid", 00:10:48.570 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:48.570 "strip_size_kb": 64, 00:10:48.570 "state": "configuring", 00:10:48.570 "raid_level": "concat", 00:10:48.570 "superblock": true, 00:10:48.570 "num_base_bdevs": 3, 00:10:48.570 "num_base_bdevs_discovered": 2, 00:10:48.570 "num_base_bdevs_operational": 3, 00:10:48.570 "base_bdevs_list": [ 00:10:48.570 { 00:10:48.570 "name": "BaseBdev1", 00:10:48.570 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:48.570 "is_configured": true, 00:10:48.570 "data_offset": 2048, 00:10:48.570 "data_size": 63488 00:10:48.570 }, 00:10:48.570 { 00:10:48.570 "name": null, 00:10:48.570 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:48.570 "is_configured": false, 00:10:48.570 "data_offset": 2048, 00:10:48.570 "data_size": 63488 00:10:48.570 }, 00:10:48.570 { 00:10:48.570 "name": "BaseBdev3", 00:10:48.570 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:48.570 "is_configured": true, 00:10:48.570 "data_offset": 2048, 00:10:48.570 "data_size": 63488 00:10:48.570 } 00:10:48.570 ] 00:10:48.570 }' 00:10:48.570 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:48.570 17:29:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.914 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.914 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:49.192 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:49.192 17:29:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:49.450 [2024-07-15 17:29:45.167898] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.450 17:29:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.450 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.707 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.707 "name": "Existed_Raid", 00:10:49.707 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:49.707 "strip_size_kb": 64, 00:10:49.707 "state": "configuring", 00:10:49.707 "raid_level": "concat", 00:10:49.707 "superblock": true, 00:10:49.707 "num_base_bdevs": 3, 00:10:49.707 "num_base_bdevs_discovered": 1, 00:10:49.707 "num_base_bdevs_operational": 3, 00:10:49.707 "base_bdevs_list": [ 00:10:49.707 { 00:10:49.707 "name": "BaseBdev1", 00:10:49.707 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:49.707 "is_configured": true, 00:10:49.707 "data_offset": 2048, 00:10:49.707 "data_size": 63488 00:10:49.707 }, 00:10:49.707 { 00:10:49.707 "name": null, 00:10:49.707 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:49.707 "is_configured": false, 00:10:49.707 "data_offset": 2048, 00:10:49.707 "data_size": 63488 00:10:49.707 }, 00:10:49.707 { 00:10:49.707 "name": null, 00:10:49.707 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:49.707 "is_configured": false, 00:10:49.707 "data_offset": 2048, 00:10:49.707 "data_size": 63488 00:10:49.707 } 00:10:49.707 ] 00:10:49.707 }' 00:10:49.707 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.707 17:29:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.965 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.965 17:29:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:50.222 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:50.222 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:50.480 [2024-07-15 17:29:46.307940] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.738 17:29:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:50.738 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:50.739 "name": "Existed_Raid", 00:10:50.739 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:50.739 "strip_size_kb": 64, 00:10:50.739 "state": "configuring", 00:10:50.739 "raid_level": "concat", 00:10:50.739 "superblock": true, 00:10:50.739 "num_base_bdevs": 3, 00:10:50.739 "num_base_bdevs_discovered": 2, 00:10:50.739 "num_base_bdevs_operational": 3, 00:10:50.739 "base_bdevs_list": [ 00:10:50.739 { 00:10:50.739 "name": "BaseBdev1", 00:10:50.739 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:50.739 "is_configured": true, 00:10:50.739 "data_offset": 2048, 00:10:50.739 "data_size": 63488 00:10:50.739 }, 00:10:50.739 { 00:10:50.739 "name": null, 00:10:50.739 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:50.739 "is_configured": false, 00:10:50.739 "data_offset": 2048, 00:10:50.739 "data_size": 63488 00:10:50.739 }, 00:10:50.739 { 00:10:50.739 "name": "BaseBdev3", 00:10:50.739 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:50.739 "is_configured": true, 00:10:50.739 "data_offset": 2048, 00:10:50.739 "data_size": 63488 00:10:50.739 } 00:10:50.739 ] 00:10:50.739 }' 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:50.739 17:29:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.304 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.304 17:29:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:51.562 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:51.562 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:51.563 
[2024-07-15 17:29:47.387961] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.821 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.080 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:52.080 "name": "Existed_Raid", 00:10:52.080 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:52.080 "strip_size_kb": 64, 00:10:52.080 "state": "configuring", 00:10:52.080 "raid_level": "concat", 00:10:52.080 "superblock": true, 00:10:52.080 "num_base_bdevs": 3, 00:10:52.080 "num_base_bdevs_discovered": 1, 00:10:52.080 "num_base_bdevs_operational": 3, 00:10:52.080 "base_bdevs_list": [ 00:10:52.080 { 00:10:52.080 "name": null, 00:10:52.080 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:52.080 "is_configured": false, 00:10:52.080 "data_offset": 2048, 00:10:52.080 "data_size": 63488 00:10:52.080 }, 00:10:52.080 { 00:10:52.080 "name": null, 00:10:52.080 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:52.080 "is_configured": false, 00:10:52.080 "data_offset": 2048, 00:10:52.080 "data_size": 63488 00:10:52.080 }, 00:10:52.080 { 00:10:52.080 "name": "BaseBdev3", 00:10:52.080 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:52.080 "is_configured": true, 00:10:52.080 "data_offset": 2048, 00:10:52.080 "data_size": 63488 00:10:52.080 } 00:10:52.080 ] 00:10:52.080 }' 00:10:52.080 17:29:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:52.080 17:29:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.338 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.338 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.596 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:52.596 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:52.855 [2024-07-15 17:29:48.566465] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:52.855 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.855 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:52.855 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.856 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.114 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.114 "name": "Existed_Raid", 00:10:53.114 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:53.114 "strip_size_kb": 64, 00:10:53.114 "state": "configuring", 00:10:53.114 "raid_level": "concat", 00:10:53.114 "superblock": true, 00:10:53.114 "num_base_bdevs": 3, 00:10:53.114 "num_base_bdevs_discovered": 2, 00:10:53.114 "num_base_bdevs_operational": 3, 00:10:53.114 "base_bdevs_list": [ 00:10:53.114 { 00:10:53.115 "name": null, 00:10:53.115 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:53.115 "is_configured": false, 00:10:53.115 "data_offset": 2048, 00:10:53.115 "data_size": 63488 00:10:53.115 }, 00:10:53.115 { 00:10:53.115 "name": "BaseBdev2", 00:10:53.115 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:53.115 "is_configured": true, 00:10:53.115 "data_offset": 2048, 00:10:53.115 "data_size": 63488 00:10:53.115 }, 00:10:53.115 { 00:10:53.115 "name": "BaseBdev3", 00:10:53.115 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:53.115 "is_configured": true, 00:10:53.115 "data_offset": 2048, 00:10:53.115 "data_size": 63488 00:10:53.115 } 00:10:53.115 ] 00:10:53.115 }' 00:10:53.115 17:29:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.115 17:29:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.376 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.376 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:53.635 17:29:49 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:53.635 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.635 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:53.907 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u d2eac189-42cf-11ef-96ac-773515fba644 00:10:54.165 [2024-07-15 17:29:49.922621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:54.165 [2024-07-15 17:29:49.922700] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25c078e34a00 00:10:54.165 [2024-07-15 17:29:49.922705] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:54.165 [2024-07-15 17:29:49.922725] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25c078e97e20 00:10:54.165 [2024-07-15 17:29:49.922771] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25c078e34a00 00:10:54.165 [2024-07-15 17:29:49.922775] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25c078e34a00 00:10:54.165 [2024-07-15 17:29:49.922795] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:54.165 NewBaseBdev 00:10:54.165 17:29:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:54.165 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:10:54.165 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:54.165 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:54.165 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:54.165 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:54.165 17:29:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:54.423 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:54.681 [ 00:10:54.681 { 00:10:54.681 "name": "NewBaseBdev", 00:10:54.681 "aliases": [ 00:10:54.681 "d2eac189-42cf-11ef-96ac-773515fba644" 00:10:54.681 ], 00:10:54.681 "product_name": "Malloc disk", 00:10:54.681 "block_size": 512, 00:10:54.681 "num_blocks": 65536, 00:10:54.681 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:54.681 "assigned_rate_limits": { 00:10:54.681 "rw_ios_per_sec": 0, 00:10:54.681 "rw_mbytes_per_sec": 0, 00:10:54.681 "r_mbytes_per_sec": 0, 00:10:54.681 "w_mbytes_per_sec": 0 00:10:54.681 }, 00:10:54.681 "claimed": true, 00:10:54.681 "claim_type": "exclusive_write", 00:10:54.681 "zoned": false, 00:10:54.681 "supported_io_types": { 00:10:54.681 "read": true, 00:10:54.681 "write": true, 00:10:54.681 "unmap": true, 00:10:54.681 "flush": true, 00:10:54.681 "reset": true, 00:10:54.681 "nvme_admin": false, 00:10:54.681 "nvme_io": false, 00:10:54.681 "nvme_io_md": false, 00:10:54.681 
"write_zeroes": true, 00:10:54.681 "zcopy": true, 00:10:54.681 "get_zone_info": false, 00:10:54.681 "zone_management": false, 00:10:54.681 "zone_append": false, 00:10:54.681 "compare": false, 00:10:54.681 "compare_and_write": false, 00:10:54.681 "abort": true, 00:10:54.681 "seek_hole": false, 00:10:54.681 "seek_data": false, 00:10:54.681 "copy": true, 00:10:54.681 "nvme_iov_md": false 00:10:54.681 }, 00:10:54.681 "memory_domains": [ 00:10:54.681 { 00:10:54.681 "dma_device_id": "system", 00:10:54.681 "dma_device_type": 1 00:10:54.681 }, 00:10:54.681 { 00:10:54.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.681 "dma_device_type": 2 00:10:54.681 } 00:10:54.681 ], 00:10:54.681 "driver_specific": {} 00:10:54.681 } 00:10:54.681 ] 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.681 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.940 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:54.940 "name": "Existed_Raid", 00:10:54.940 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:54.940 "strip_size_kb": 64, 00:10:54.940 "state": "online", 00:10:54.940 "raid_level": "concat", 00:10:54.940 "superblock": true, 00:10:54.940 "num_base_bdevs": 3, 00:10:54.940 "num_base_bdevs_discovered": 3, 00:10:54.940 "num_base_bdevs_operational": 3, 00:10:54.940 "base_bdevs_list": [ 00:10:54.940 { 00:10:54.940 "name": "NewBaseBdev", 00:10:54.940 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:54.940 "is_configured": true, 00:10:54.940 "data_offset": 2048, 00:10:54.940 "data_size": 63488 00:10:54.940 }, 00:10:54.940 { 00:10:54.940 "name": "BaseBdev2", 00:10:54.940 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:54.940 "is_configured": true, 00:10:54.940 "data_offset": 2048, 00:10:54.940 "data_size": 63488 00:10:54.940 }, 00:10:54.940 { 00:10:54.940 "name": "BaseBdev3", 00:10:54.940 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:54.940 "is_configured": true, 00:10:54.940 "data_offset": 2048, 00:10:54.940 "data_size": 63488 00:10:54.940 } 00:10:54.940 ] 
00:10:54.940 }' 00:10:54.940 17:29:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:54.940 17:29:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.199 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.199 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:55.199 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:55.199 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:55.199 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:55.199 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:55.199 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:55.199 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:55.459 [2024-07-15 17:29:51.226572] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.459 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:55.459 "name": "Existed_Raid", 00:10:55.459 "aliases": [ 00:10:55.459 "d1cd1a6f-42cf-11ef-96ac-773515fba644" 00:10:55.459 ], 00:10:55.459 "product_name": "Raid Volume", 00:10:55.459 "block_size": 512, 00:10:55.459 "num_blocks": 190464, 00:10:55.459 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:55.459 "assigned_rate_limits": { 00:10:55.459 "rw_ios_per_sec": 0, 00:10:55.459 "rw_mbytes_per_sec": 0, 00:10:55.459 "r_mbytes_per_sec": 0, 00:10:55.459 "w_mbytes_per_sec": 0 00:10:55.459 }, 00:10:55.459 "claimed": false, 00:10:55.459 "zoned": false, 00:10:55.459 "supported_io_types": { 00:10:55.459 "read": true, 00:10:55.459 "write": true, 00:10:55.459 "unmap": true, 00:10:55.459 "flush": true, 00:10:55.459 "reset": true, 00:10:55.459 "nvme_admin": false, 00:10:55.459 "nvme_io": false, 00:10:55.459 "nvme_io_md": false, 00:10:55.459 "write_zeroes": true, 00:10:55.459 "zcopy": false, 00:10:55.459 "get_zone_info": false, 00:10:55.459 "zone_management": false, 00:10:55.459 "zone_append": false, 00:10:55.459 "compare": false, 00:10:55.459 "compare_and_write": false, 00:10:55.459 "abort": false, 00:10:55.459 "seek_hole": false, 00:10:55.459 "seek_data": false, 00:10:55.459 "copy": false, 00:10:55.459 "nvme_iov_md": false 00:10:55.459 }, 00:10:55.459 "memory_domains": [ 00:10:55.459 { 00:10:55.459 "dma_device_id": "system", 00:10:55.459 "dma_device_type": 1 00:10:55.459 }, 00:10:55.459 { 00:10:55.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.459 "dma_device_type": 2 00:10:55.459 }, 00:10:55.459 { 00:10:55.459 "dma_device_id": "system", 00:10:55.459 "dma_device_type": 1 00:10:55.459 }, 00:10:55.459 { 00:10:55.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.459 "dma_device_type": 2 00:10:55.459 }, 00:10:55.459 { 00:10:55.459 "dma_device_id": "system", 00:10:55.459 "dma_device_type": 1 00:10:55.459 }, 00:10:55.459 { 00:10:55.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.459 "dma_device_type": 2 00:10:55.459 } 00:10:55.459 ], 00:10:55.459 "driver_specific": { 00:10:55.459 "raid": { 00:10:55.459 "uuid": "d1cd1a6f-42cf-11ef-96ac-773515fba644", 00:10:55.459 
"strip_size_kb": 64, 00:10:55.459 "state": "online", 00:10:55.459 "raid_level": "concat", 00:10:55.459 "superblock": true, 00:10:55.459 "num_base_bdevs": 3, 00:10:55.459 "num_base_bdevs_discovered": 3, 00:10:55.459 "num_base_bdevs_operational": 3, 00:10:55.459 "base_bdevs_list": [ 00:10:55.459 { 00:10:55.459 "name": "NewBaseBdev", 00:10:55.459 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:55.459 "is_configured": true, 00:10:55.459 "data_offset": 2048, 00:10:55.459 "data_size": 63488 00:10:55.459 }, 00:10:55.459 { 00:10:55.459 "name": "BaseBdev2", 00:10:55.459 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:55.459 "is_configured": true, 00:10:55.459 "data_offset": 2048, 00:10:55.459 "data_size": 63488 00:10:55.459 }, 00:10:55.459 { 00:10:55.459 "name": "BaseBdev3", 00:10:55.459 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:55.459 "is_configured": true, 00:10:55.459 "data_offset": 2048, 00:10:55.459 "data_size": 63488 00:10:55.459 } 00:10:55.459 ] 00:10:55.459 } 00:10:55.459 } 00:10:55.459 }' 00:10:55.459 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.459 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:55.459 BaseBdev2 00:10:55.459 BaseBdev3' 00:10:55.459 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:55.459 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:55.459 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:55.718 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:55.718 "name": "NewBaseBdev", 00:10:55.718 "aliases": [ 00:10:55.718 "d2eac189-42cf-11ef-96ac-773515fba644" 00:10:55.718 ], 00:10:55.718 "product_name": "Malloc disk", 00:10:55.718 "block_size": 512, 00:10:55.718 "num_blocks": 65536, 00:10:55.718 "uuid": "d2eac189-42cf-11ef-96ac-773515fba644", 00:10:55.718 "assigned_rate_limits": { 00:10:55.718 "rw_ios_per_sec": 0, 00:10:55.718 "rw_mbytes_per_sec": 0, 00:10:55.718 "r_mbytes_per_sec": 0, 00:10:55.718 "w_mbytes_per_sec": 0 00:10:55.718 }, 00:10:55.718 "claimed": true, 00:10:55.718 "claim_type": "exclusive_write", 00:10:55.718 "zoned": false, 00:10:55.718 "supported_io_types": { 00:10:55.718 "read": true, 00:10:55.718 "write": true, 00:10:55.718 "unmap": true, 00:10:55.718 "flush": true, 00:10:55.718 "reset": true, 00:10:55.718 "nvme_admin": false, 00:10:55.718 "nvme_io": false, 00:10:55.718 "nvme_io_md": false, 00:10:55.718 "write_zeroes": true, 00:10:55.718 "zcopy": true, 00:10:55.718 "get_zone_info": false, 00:10:55.718 "zone_management": false, 00:10:55.718 "zone_append": false, 00:10:55.718 "compare": false, 00:10:55.718 "compare_and_write": false, 00:10:55.718 "abort": true, 00:10:55.718 "seek_hole": false, 00:10:55.718 "seek_data": false, 00:10:55.718 "copy": true, 00:10:55.718 "nvme_iov_md": false 00:10:55.718 }, 00:10:55.718 "memory_domains": [ 00:10:55.718 { 00:10:55.718 "dma_device_id": "system", 00:10:55.718 "dma_device_type": 1 00:10:55.718 }, 00:10:55.718 { 00:10:55.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.718 "dma_device_type": 2 00:10:55.718 } 00:10:55.718 ], 00:10:55.718 "driver_specific": {} 00:10:55.718 }' 00:10:55.979 17:29:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:55.979 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:56.237 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:56.237 "name": "BaseBdev2", 00:10:56.237 "aliases": [ 00:10:56.237 "d0db6561-42cf-11ef-96ac-773515fba644" 00:10:56.237 ], 00:10:56.237 "product_name": "Malloc disk", 00:10:56.237 "block_size": 512, 00:10:56.237 "num_blocks": 65536, 00:10:56.237 "uuid": "d0db6561-42cf-11ef-96ac-773515fba644", 00:10:56.237 "assigned_rate_limits": { 00:10:56.237 "rw_ios_per_sec": 0, 00:10:56.237 "rw_mbytes_per_sec": 0, 00:10:56.237 "r_mbytes_per_sec": 0, 00:10:56.237 "w_mbytes_per_sec": 0 00:10:56.237 }, 00:10:56.237 "claimed": true, 00:10:56.237 "claim_type": "exclusive_write", 00:10:56.237 "zoned": false, 00:10:56.237 "supported_io_types": { 00:10:56.237 "read": true, 00:10:56.237 "write": true, 00:10:56.237 "unmap": true, 00:10:56.237 "flush": true, 00:10:56.237 "reset": true, 00:10:56.237 "nvme_admin": false, 00:10:56.237 "nvme_io": false, 00:10:56.237 "nvme_io_md": false, 00:10:56.237 "write_zeroes": true, 00:10:56.237 "zcopy": true, 00:10:56.237 "get_zone_info": false, 00:10:56.237 "zone_management": false, 00:10:56.237 "zone_append": false, 00:10:56.237 "compare": false, 00:10:56.237 "compare_and_write": false, 00:10:56.237 "abort": true, 00:10:56.237 "seek_hole": false, 00:10:56.237 "seek_data": false, 00:10:56.237 "copy": true, 00:10:56.237 "nvme_iov_md": false 00:10:56.237 }, 00:10:56.237 "memory_domains": [ 00:10:56.237 { 00:10:56.237 "dma_device_id": "system", 00:10:56.237 "dma_device_type": 1 00:10:56.237 }, 00:10:56.237 { 00:10:56.238 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.238 "dma_device_type": 2 00:10:56.238 } 00:10:56.238 ], 00:10:56.238 "driver_specific": {} 00:10:56.238 }' 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:56.238 17:29:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:56.496 "name": "BaseBdev3", 00:10:56.496 "aliases": [ 00:10:56.496 "d15306f9-42cf-11ef-96ac-773515fba644" 00:10:56.496 ], 00:10:56.496 "product_name": "Malloc disk", 00:10:56.496 "block_size": 512, 00:10:56.496 "num_blocks": 65536, 00:10:56.496 "uuid": "d15306f9-42cf-11ef-96ac-773515fba644", 00:10:56.496 "assigned_rate_limits": { 00:10:56.496 "rw_ios_per_sec": 0, 00:10:56.496 "rw_mbytes_per_sec": 0, 00:10:56.496 "r_mbytes_per_sec": 0, 00:10:56.496 "w_mbytes_per_sec": 0 00:10:56.496 }, 00:10:56.496 "claimed": true, 00:10:56.496 "claim_type": "exclusive_write", 00:10:56.496 "zoned": false, 00:10:56.496 "supported_io_types": { 00:10:56.496 "read": true, 00:10:56.496 "write": true, 00:10:56.496 "unmap": true, 00:10:56.496 "flush": true, 00:10:56.496 "reset": true, 00:10:56.496 "nvme_admin": false, 00:10:56.496 "nvme_io": false, 00:10:56.496 "nvme_io_md": false, 00:10:56.496 "write_zeroes": true, 00:10:56.496 "zcopy": true, 00:10:56.496 "get_zone_info": false, 00:10:56.496 "zone_management": false, 00:10:56.496 "zone_append": false, 00:10:56.496 "compare": false, 00:10:56.496 "compare_and_write": false, 00:10:56.496 "abort": true, 00:10:56.496 "seek_hole": false, 00:10:56.496 "seek_data": false, 00:10:56.496 "copy": true, 00:10:56.496 "nvme_iov_md": false 00:10:56.496 }, 00:10:56.496 "memory_domains": [ 00:10:56.496 { 00:10:56.496 "dma_device_id": "system", 00:10:56.496 "dma_device_type": 1 00:10:56.496 }, 00:10:56.496 { 00:10:56.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:56.496 "dma_device_type": 2 00:10:56.496 } 00:10:56.496 ], 00:10:56.496 "driver_specific": {} 00:10:56.496 }' 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:56.496 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:56.755 [2024-07-15 17:29:52.506667] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:56.755 [2024-07-15 17:29:52.506702] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.755 [2024-07-15 17:29:52.506724] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.755 [2024-07-15 17:29:52.506737] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.755 [2024-07-15 17:29:52.506741] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25c078e34a00 name Existed_Raid, state offline 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 54749 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 54749 ']' 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 54749 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 54749 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54749' 00:10:56.755 killing process with pid 54749 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 54749 00:10:56.755 [2024-07-15 17:29:52.534119] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:56.755 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 54749 00:10:56.755 [2024-07-15 17:29:52.551631] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.014 17:29:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # 
return 0 00:10:57.014 00:10:57.014 real 0m24.017s 00:10:57.014 user 0m43.874s 00:10:57.014 sys 0m3.321s 00:10:57.014 ************************************ 00:10:57.014 END TEST raid_state_function_test_sb 00:10:57.014 ************************************ 00:10:57.014 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.014 17:29:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.014 17:29:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:57.014 17:29:52 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:57.014 17:29:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:10:57.014 17:29:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.014 17:29:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.014 ************************************ 00:10:57.014 START TEST raid_superblock_test 00:10:57.014 ************************************ 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=55477 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 55477 /var/tmp/spdk-raid.sock 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 55477 ']' 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:57.014 17:29:52 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:57.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:57.014 17:29:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.014 [2024-07-15 17:29:52.786833] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:10:57.014 [2024-07-15 17:29:52.787016] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:57.690 EAL: TSC is not safe to use in SMP mode 00:10:57.690 EAL: TSC is not invariant 00:10:57.690 [2024-07-15 17:29:53.367117] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.690 [2024-07-15 17:29:53.457066] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:57.690 [2024-07-15 17:29:53.459200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.690 [2024-07-15 17:29:53.459957] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.690 [2024-07-15 17:29:53.459970] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.255 17:29:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:58.535 malloc1 00:10:58.535 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:58.793 [2024-07-15 17:29:54.416690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:58.793 [2024-07-15 17:29:54.416758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.793 [2024-07-15 17:29:54.416771] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1003b2434780 00:10:58.793 [2024-07-15 17:29:54.416780] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.793 [2024-07-15 17:29:54.417693] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.793 [2024-07-15 17:29:54.417721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:58.793 pt1 00:10:58.793 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:58.793 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:58.793 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:58.793 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:58.793 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:58.793 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.794 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.794 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.794 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:59.052 malloc2 00:10:59.052 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.310 [2024-07-15 17:29:54.980697] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.310 [2024-07-15 17:29:54.980749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.310 [2024-07-15 17:29:54.980761] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1003b2434c80 00:10:59.310 [2024-07-15 17:29:54.980769] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.310 [2024-07-15 17:29:54.981419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.310 [2024-07-15 17:29:54.981443] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.310 pt2 00:10:59.310 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:59.310 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:59.311 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:10:59.311 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:10:59.311 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:59.311 17:29:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:59.311 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:59.311 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:59.311 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:59.568 malloc3 00:10:59.568 17:29:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:59.826 [2024-07-15 17:29:55.464707] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:59.826 [2024-07-15 17:29:55.464763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.826 [2024-07-15 17:29:55.464776] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1003b2435180 00:10:59.826 [2024-07-15 17:29:55.464784] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.826 [2024-07-15 17:29:55.465439] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.826 [2024-07-15 17:29:55.465463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:59.826 pt3 00:10:59.826 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:59.826 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:59.826 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:00.084 [2024-07-15 17:29:55.700713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:00.084 [2024-07-15 17:29:55.701301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.084 [2024-07-15 17:29:55.701330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.084 [2024-07-15 17:29:55.701387] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1003b2435400 00:11:00.084 [2024-07-15 17:29:55.701393] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.084 [2024-07-15 17:29:55.701426] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1003b2497e20 00:11:00.084 [2024-07-15 17:29:55.701511] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1003b2435400 00:11:00.084 [2024-07-15 17:29:55.701516] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1003b2435400 00:11:00.084 [2024-07-15 17:29:55.701543] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.084 17:29:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.342 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.342 "name": "raid_bdev1", 00:11:00.342 "uuid": "da2096b6-42cf-11ef-96ac-773515fba644", 00:11:00.342 "strip_size_kb": 64, 00:11:00.342 "state": "online", 00:11:00.342 "raid_level": "concat", 00:11:00.342 "superblock": true, 00:11:00.342 "num_base_bdevs": 3, 00:11:00.342 "num_base_bdevs_discovered": 3, 00:11:00.342 "num_base_bdevs_operational": 3, 00:11:00.342 "base_bdevs_list": [ 00:11:00.342 { 00:11:00.342 "name": "pt1", 00:11:00.342 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.342 "is_configured": true, 00:11:00.342 "data_offset": 2048, 00:11:00.342 "data_size": 63488 00:11:00.342 }, 00:11:00.342 { 00:11:00.342 "name": "pt2", 00:11:00.342 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.342 "is_configured": true, 00:11:00.342 "data_offset": 2048, 00:11:00.342 "data_size": 63488 00:11:00.342 }, 00:11:00.342 { 00:11:00.342 "name": "pt3", 00:11:00.342 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.342 "is_configured": true, 00:11:00.342 "data_offset": 2048, 00:11:00.342 "data_size": 63488 00:11:00.342 } 00:11:00.342 ] 00:11:00.342 }' 00:11:00.342 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.342 17:29:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.599 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:00.599 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:00.599 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:00.599 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:00.599 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:00.599 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:00.599 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:00.599 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:00.858 [2024-07-15 17:29:56.660767] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.858 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:00.858 "name": "raid_bdev1", 00:11:00.858 "aliases": [ 00:11:00.858 "da2096b6-42cf-11ef-96ac-773515fba644" 00:11:00.858 ], 00:11:00.858 "product_name": "Raid Volume", 00:11:00.858 "block_size": 512, 00:11:00.858 "num_blocks": 190464, 00:11:00.858 "uuid": "da2096b6-42cf-11ef-96ac-773515fba644", 00:11:00.858 "assigned_rate_limits": { 00:11:00.858 "rw_ios_per_sec": 0, 00:11:00.858 "rw_mbytes_per_sec": 0, 00:11:00.858 "r_mbytes_per_sec": 0, 00:11:00.858 "w_mbytes_per_sec": 0 00:11:00.858 }, 00:11:00.858 "claimed": false, 00:11:00.858 "zoned": false, 00:11:00.858 "supported_io_types": { 00:11:00.858 "read": true, 00:11:00.858 "write": true, 00:11:00.859 "unmap": true, 
00:11:00.859 "flush": true, 00:11:00.859 "reset": true, 00:11:00.859 "nvme_admin": false, 00:11:00.859 "nvme_io": false, 00:11:00.859 "nvme_io_md": false, 00:11:00.859 "write_zeroes": true, 00:11:00.859 "zcopy": false, 00:11:00.859 "get_zone_info": false, 00:11:00.859 "zone_management": false, 00:11:00.859 "zone_append": false, 00:11:00.859 "compare": false, 00:11:00.859 "compare_and_write": false, 00:11:00.859 "abort": false, 00:11:00.859 "seek_hole": false, 00:11:00.859 "seek_data": false, 00:11:00.859 "copy": false, 00:11:00.859 "nvme_iov_md": false 00:11:00.859 }, 00:11:00.859 "memory_domains": [ 00:11:00.859 { 00:11:00.859 "dma_device_id": "system", 00:11:00.859 "dma_device_type": 1 00:11:00.859 }, 00:11:00.859 { 00:11:00.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.859 "dma_device_type": 2 00:11:00.859 }, 00:11:00.859 { 00:11:00.859 "dma_device_id": "system", 00:11:00.859 "dma_device_type": 1 00:11:00.859 }, 00:11:00.859 { 00:11:00.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.859 "dma_device_type": 2 00:11:00.859 }, 00:11:00.859 { 00:11:00.859 "dma_device_id": "system", 00:11:00.859 "dma_device_type": 1 00:11:00.859 }, 00:11:00.859 { 00:11:00.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.859 "dma_device_type": 2 00:11:00.859 } 00:11:00.859 ], 00:11:00.859 "driver_specific": { 00:11:00.859 "raid": { 00:11:00.859 "uuid": "da2096b6-42cf-11ef-96ac-773515fba644", 00:11:00.859 "strip_size_kb": 64, 00:11:00.859 "state": "online", 00:11:00.859 "raid_level": "concat", 00:11:00.859 "superblock": true, 00:11:00.859 "num_base_bdevs": 3, 00:11:00.859 "num_base_bdevs_discovered": 3, 00:11:00.859 "num_base_bdevs_operational": 3, 00:11:00.859 "base_bdevs_list": [ 00:11:00.859 { 00:11:00.859 "name": "pt1", 00:11:00.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.859 "is_configured": true, 00:11:00.859 "data_offset": 2048, 00:11:00.859 "data_size": 63488 00:11:00.859 }, 00:11:00.859 { 00:11:00.859 "name": "pt2", 00:11:00.859 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.859 "is_configured": true, 00:11:00.859 "data_offset": 2048, 00:11:00.859 "data_size": 63488 00:11:00.859 }, 00:11:00.859 { 00:11:00.859 "name": "pt3", 00:11:00.859 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.859 "is_configured": true, 00:11:00.859 "data_offset": 2048, 00:11:00.859 "data_size": 63488 00:11:00.859 } 00:11:00.859 ] 00:11:00.859 } 00:11:00.859 } 00:11:00.859 }' 00:11:00.859 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:00.859 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:00.859 pt2 00:11:00.859 pt3' 00:11:00.859 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:00.859 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:00.859 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:01.429 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.429 "name": "pt1", 00:11:01.429 "aliases": [ 00:11:01.429 "00000000-0000-0000-0000-000000000001" 00:11:01.429 ], 00:11:01.429 "product_name": "passthru", 00:11:01.429 "block_size": 512, 00:11:01.429 "num_blocks": 65536, 00:11:01.429 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.429 "assigned_rate_limits": { 
00:11:01.429 "rw_ios_per_sec": 0, 00:11:01.429 "rw_mbytes_per_sec": 0, 00:11:01.429 "r_mbytes_per_sec": 0, 00:11:01.429 "w_mbytes_per_sec": 0 00:11:01.429 }, 00:11:01.429 "claimed": true, 00:11:01.429 "claim_type": "exclusive_write", 00:11:01.429 "zoned": false, 00:11:01.429 "supported_io_types": { 00:11:01.429 "read": true, 00:11:01.429 "write": true, 00:11:01.429 "unmap": true, 00:11:01.429 "flush": true, 00:11:01.429 "reset": true, 00:11:01.429 "nvme_admin": false, 00:11:01.429 "nvme_io": false, 00:11:01.429 "nvme_io_md": false, 00:11:01.429 "write_zeroes": true, 00:11:01.429 "zcopy": true, 00:11:01.429 "get_zone_info": false, 00:11:01.429 "zone_management": false, 00:11:01.429 "zone_append": false, 00:11:01.429 "compare": false, 00:11:01.429 "compare_and_write": false, 00:11:01.429 "abort": true, 00:11:01.429 "seek_hole": false, 00:11:01.429 "seek_data": false, 00:11:01.429 "copy": true, 00:11:01.429 "nvme_iov_md": false 00:11:01.429 }, 00:11:01.429 "memory_domains": [ 00:11:01.429 { 00:11:01.429 "dma_device_id": "system", 00:11:01.429 "dma_device_type": 1 00:11:01.429 }, 00:11:01.429 { 00:11:01.430 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.430 "dma_device_type": 2 00:11:01.430 } 00:11:01.430 ], 00:11:01.430 "driver_specific": { 00:11:01.430 "passthru": { 00:11:01.430 "name": "pt1", 00:11:01.430 "base_bdev_name": "malloc1" 00:11:01.430 } 00:11:01.430 } 00:11:01.430 }' 00:11:01.430 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.430 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.430 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.430 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.430 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.430 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.430 17:29:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.430 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.430 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.430 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.430 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.430 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.430 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.430 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:01.430 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.688 "name": "pt2", 00:11:01.688 "aliases": [ 00:11:01.688 "00000000-0000-0000-0000-000000000002" 00:11:01.688 ], 00:11:01.688 "product_name": "passthru", 00:11:01.688 "block_size": 512, 00:11:01.688 "num_blocks": 65536, 00:11:01.688 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.688 "assigned_rate_limits": { 00:11:01.688 "rw_ios_per_sec": 0, 00:11:01.688 "rw_mbytes_per_sec": 0, 00:11:01.688 "r_mbytes_per_sec": 0, 00:11:01.688 "w_mbytes_per_sec": 0 00:11:01.688 
}, 00:11:01.688 "claimed": true, 00:11:01.688 "claim_type": "exclusive_write", 00:11:01.688 "zoned": false, 00:11:01.688 "supported_io_types": { 00:11:01.688 "read": true, 00:11:01.688 "write": true, 00:11:01.688 "unmap": true, 00:11:01.688 "flush": true, 00:11:01.688 "reset": true, 00:11:01.688 "nvme_admin": false, 00:11:01.688 "nvme_io": false, 00:11:01.688 "nvme_io_md": false, 00:11:01.688 "write_zeroes": true, 00:11:01.688 "zcopy": true, 00:11:01.688 "get_zone_info": false, 00:11:01.688 "zone_management": false, 00:11:01.688 "zone_append": false, 00:11:01.688 "compare": false, 00:11:01.688 "compare_and_write": false, 00:11:01.688 "abort": true, 00:11:01.688 "seek_hole": false, 00:11:01.688 "seek_data": false, 00:11:01.688 "copy": true, 00:11:01.688 "nvme_iov_md": false 00:11:01.688 }, 00:11:01.688 "memory_domains": [ 00:11:01.688 { 00:11:01.688 "dma_device_id": "system", 00:11:01.688 "dma_device_type": 1 00:11:01.688 }, 00:11:01.688 { 00:11:01.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.688 "dma_device_type": 2 00:11:01.688 } 00:11:01.688 ], 00:11:01.688 "driver_specific": { 00:11:01.688 "passthru": { 00:11:01.688 "name": "pt2", 00:11:01.688 "base_bdev_name": "malloc2" 00:11:01.688 } 00:11:01.688 } 00:11:01.688 }' 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:01.688 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:01.946 "name": "pt3", 00:11:01.946 "aliases": [ 00:11:01.946 "00000000-0000-0000-0000-000000000003" 00:11:01.946 ], 00:11:01.946 "product_name": "passthru", 00:11:01.946 "block_size": 512, 00:11:01.946 "num_blocks": 65536, 00:11:01.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.946 "assigned_rate_limits": { 00:11:01.946 "rw_ios_per_sec": 0, 00:11:01.946 "rw_mbytes_per_sec": 0, 00:11:01.946 "r_mbytes_per_sec": 0, 00:11:01.946 "w_mbytes_per_sec": 0 00:11:01.946 }, 00:11:01.946 "claimed": true, 00:11:01.946 "claim_type": "exclusive_write", 00:11:01.946 "zoned": false, 00:11:01.946 "supported_io_types": { 
00:11:01.946 "read": true, 00:11:01.946 "write": true, 00:11:01.946 "unmap": true, 00:11:01.946 "flush": true, 00:11:01.946 "reset": true, 00:11:01.946 "nvme_admin": false, 00:11:01.946 "nvme_io": false, 00:11:01.946 "nvme_io_md": false, 00:11:01.946 "write_zeroes": true, 00:11:01.946 "zcopy": true, 00:11:01.946 "get_zone_info": false, 00:11:01.946 "zone_management": false, 00:11:01.946 "zone_append": false, 00:11:01.946 "compare": false, 00:11:01.946 "compare_and_write": false, 00:11:01.946 "abort": true, 00:11:01.946 "seek_hole": false, 00:11:01.946 "seek_data": false, 00:11:01.946 "copy": true, 00:11:01.946 "nvme_iov_md": false 00:11:01.946 }, 00:11:01.946 "memory_domains": [ 00:11:01.946 { 00:11:01.946 "dma_device_id": "system", 00:11:01.946 "dma_device_type": 1 00:11:01.946 }, 00:11:01.946 { 00:11:01.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.946 "dma_device_type": 2 00:11:01.946 } 00:11:01.946 ], 00:11:01.946 "driver_specific": { 00:11:01.946 "passthru": { 00:11:01.946 "name": "pt3", 00:11:01.946 "base_bdev_name": "malloc3" 00:11:01.946 } 00:11:01.946 } 00:11:01.946 }' 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:01.946 17:29:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:02.204 [2024-07-15 17:29:58.012782] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:02.204 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=da2096b6-42cf-11ef-96ac-773515fba644 00:11:02.204 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z da2096b6-42cf-11ef-96ac-773515fba644 ']' 00:11:02.204 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:02.766 [2024-07-15 17:29:58.292741] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:02.766 [2024-07-15 17:29:58.292763] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:02.766 [2024-07-15 17:29:58.292785] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:02.766 [2024-07-15 17:29:58.292800] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:02.766 [2024-07-15 17:29:58.292804] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1003b2435400 name raid_bdev1, state offline 00:11:02.766 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:02.766 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.766 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:02.766 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:02.766 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:02.766 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:03.022 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.022 17:29:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:03.279 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:03.279 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:03.536 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:03.536 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
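For reference, the negative-path check being set up in the trace above can be reproduced by hand. A minimal sketch, assuming the socket path, rpc.py location, and bdev names from this run; the test's own NOT/valid_exec_arg helpers perform the equivalent exit-status check:

    # Re-creating raid_bdev1 from base bdevs that still carry a superblock from a
    # previous RAID is expected to fail with -17 (File exists), so a non-zero exit
    # from the RPC is the passing outcome here.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if "$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat \
            -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded" >&2
        exit 1
    fi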
00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:03.794 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:04.052 [2024-07-15 17:29:59.836784] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:04.052 [2024-07-15 17:29:59.837369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:04.052 [2024-07-15 17:29:59.837388] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:04.052 [2024-07-15 17:29:59.837402] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:04.052 [2024-07-15 17:29:59.837445] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:04.052 [2024-07-15 17:29:59.837457] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:04.052 [2024-07-15 17:29:59.837465] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:04.052 [2024-07-15 17:29:59.837470] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1003b2435180 name raid_bdev1, state configuring 00:11:04.052 request: 00:11:04.052 { 00:11:04.052 "name": "raid_bdev1", 00:11:04.052 "raid_level": "concat", 00:11:04.052 "base_bdevs": [ 00:11:04.052 "malloc1", 00:11:04.052 "malloc2", 00:11:04.052 "malloc3" 00:11:04.052 ], 00:11:04.052 "strip_size_kb": 64, 00:11:04.052 "superblock": false, 00:11:04.052 "method": "bdev_raid_create", 00:11:04.052 "req_id": 1 00:11:04.052 } 00:11:04.052 Got JSON-RPC error response 00:11:04.052 response: 00:11:04.052 { 00:11:04.052 "code": -17, 00:11:04.052 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:04.052 } 00:11:04.052 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:04.052 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:04.052 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:04.052 17:29:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:04.052 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:04.052 17:29:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.311 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:04.311 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:04.311 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:04.569 [2024-07-15 17:30:00.336781] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:04.569 [2024-07-15 17:30:00.336835] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.569 [2024-07-15 17:30:00.336848] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x1003b2434c80 00:11:04.569 [2024-07-15 17:30:00.336866] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.569 [2024-07-15 17:30:00.337517] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.569 [2024-07-15 17:30:00.337549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:04.569 [2024-07-15 17:30:00.337574] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:04.569 [2024-07-15 17:30:00.337586] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:04.569 pt1 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.569 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.853 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:04.853 "name": "raid_bdev1", 00:11:04.853 "uuid": "da2096b6-42cf-11ef-96ac-773515fba644", 00:11:04.853 "strip_size_kb": 64, 00:11:04.853 "state": "configuring", 00:11:04.853 "raid_level": "concat", 00:11:04.853 "superblock": true, 00:11:04.853 "num_base_bdevs": 3, 00:11:04.853 "num_base_bdevs_discovered": 1, 00:11:04.853 "num_base_bdevs_operational": 3, 00:11:04.853 "base_bdevs_list": [ 00:11:04.853 { 00:11:04.853 "name": "pt1", 00:11:04.853 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:04.853 "is_configured": true, 00:11:04.853 "data_offset": 2048, 00:11:04.853 "data_size": 63488 00:11:04.853 }, 00:11:04.853 { 00:11:04.853 "name": null, 00:11:04.853 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:04.853 "is_configured": false, 00:11:04.853 "data_offset": 2048, 00:11:04.853 "data_size": 63488 00:11:04.853 }, 00:11:04.853 { 00:11:04.853 "name": null, 00:11:04.853 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:04.853 "is_configured": false, 00:11:04.853 "data_offset": 2048, 00:11:04.853 "data_size": 63488 00:11:04.853 } 00:11:04.853 ] 00:11:04.853 }' 00:11:04.853 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:04.853 17:30:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.112 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 
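The verify_raid_bdev_state call traced above reduces to one RPC plus a jq filter over its output. A minimal sketch, assuming the socket path and names from this run (not the literal helper code):

    # Fetch the named RAID bdev and compare the fields the test asserts on:
    # state "configuring" with only pt1 of the three base bdevs discovered so far.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<< "$info") == configuring ]]
    [[ $(jq -r .raid_level <<< "$info") == concat ]]
    [[ $(jq -r .num_base_bdevs_discovered <<< "$info") == 1 ]]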
00:11:05.112 17:30:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:05.373 [2024-07-15 17:30:01.180796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:05.373 [2024-07-15 17:30:01.180869] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:05.373 [2024-07-15 17:30:01.180881] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1003b2435680 00:11:05.373 [2024-07-15 17:30:01.180889] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:05.373 [2024-07-15 17:30:01.181036] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:05.373 [2024-07-15 17:30:01.181067] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:05.373 [2024-07-15 17:30:01.181107] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:05.373 [2024-07-15 17:30:01.181116] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:05.373 pt2 00:11:05.373 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:05.630 [2024-07-15 17:30:01.452822] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.888 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:06.145 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.145 "name": "raid_bdev1", 00:11:06.145 "uuid": "da2096b6-42cf-11ef-96ac-773515fba644", 00:11:06.145 "strip_size_kb": 64, 00:11:06.145 "state": "configuring", 00:11:06.145 "raid_level": "concat", 00:11:06.145 "superblock": true, 00:11:06.145 "num_base_bdevs": 3, 00:11:06.145 "num_base_bdevs_discovered": 1, 00:11:06.145 "num_base_bdevs_operational": 3, 00:11:06.145 "base_bdevs_list": [ 00:11:06.145 { 00:11:06.145 "name": "pt1", 00:11:06.145 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:06.145 "is_configured": 
true, 00:11:06.145 "data_offset": 2048, 00:11:06.145 "data_size": 63488 00:11:06.145 }, 00:11:06.145 { 00:11:06.145 "name": null, 00:11:06.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:06.145 "is_configured": false, 00:11:06.145 "data_offset": 2048, 00:11:06.145 "data_size": 63488 00:11:06.145 }, 00:11:06.145 { 00:11:06.145 "name": null, 00:11:06.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:06.145 "is_configured": false, 00:11:06.145 "data_offset": 2048, 00:11:06.145 "data_size": 63488 00:11:06.145 } 00:11:06.145 ] 00:11:06.145 }' 00:11:06.145 17:30:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.145 17:30:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.403 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:06.403 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:06.403 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:06.661 [2024-07-15 17:30:02.368830] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:06.661 [2024-07-15 17:30:02.368892] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.661 [2024-07-15 17:30:02.368904] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1003b2435680 00:11:06.661 [2024-07-15 17:30:02.368912] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.661 [2024-07-15 17:30:02.369023] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.661 [2024-07-15 17:30:02.369035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:06.661 [2024-07-15 17:30:02.369058] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:06.661 [2024-07-15 17:30:02.369066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:06.661 pt2 00:11:06.661 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:06.661 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:06.661 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:06.918 [2024-07-15 17:30:02.648836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:06.918 [2024-07-15 17:30:02.648887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:06.918 [2024-07-15 17:30:02.648899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1003b2435400 00:11:06.918 [2024-07-15 17:30:02.648907] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:06.918 [2024-07-15 17:30:02.649019] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:06.918 [2024-07-15 17:30:02.649030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:06.918 [2024-07-15 17:30:02.649053] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:06.918 [2024-07-15 17:30:02.649061] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt3 is claimed 00:11:06.918 [2024-07-15 17:30:02.649098] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1003b2434780 00:11:06.918 [2024-07-15 17:30:02.649103] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:06.918 [2024-07-15 17:30:02.649124] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1003b2497e20 00:11:06.918 [2024-07-15 17:30:02.649186] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1003b2434780 00:11:06.918 [2024-07-15 17:30:02.649190] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1003b2434780 00:11:06.918 [2024-07-15 17:30:02.649212] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.918 pt3 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.918 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:07.175 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:07.175 "name": "raid_bdev1", 00:11:07.175 "uuid": "da2096b6-42cf-11ef-96ac-773515fba644", 00:11:07.175 "strip_size_kb": 64, 00:11:07.175 "state": "online", 00:11:07.175 "raid_level": "concat", 00:11:07.175 "superblock": true, 00:11:07.175 "num_base_bdevs": 3, 00:11:07.175 "num_base_bdevs_discovered": 3, 00:11:07.175 "num_base_bdevs_operational": 3, 00:11:07.175 "base_bdevs_list": [ 00:11:07.175 { 00:11:07.175 "name": "pt1", 00:11:07.175 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.175 "is_configured": true, 00:11:07.175 "data_offset": 2048, 00:11:07.175 "data_size": 63488 00:11:07.175 }, 00:11:07.175 { 00:11:07.175 "name": "pt2", 00:11:07.175 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.175 "is_configured": true, 00:11:07.175 "data_offset": 2048, 00:11:07.175 "data_size": 63488 00:11:07.175 }, 00:11:07.175 { 00:11:07.175 "name": "pt3", 00:11:07.175 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.175 "is_configured": true, 00:11:07.175 "data_offset": 2048, 
00:11:07.175 "data_size": 63488 00:11:07.175 } 00:11:07.175 ] 00:11:07.175 }' 00:11:07.175 17:30:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:07.175 17:30:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.432 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:07.432 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:07.432 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:07.432 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:07.432 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:07.432 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:07.432 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:07.432 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:07.689 [2024-07-15 17:30:03.504896] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.946 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:07.946 "name": "raid_bdev1", 00:11:07.946 "aliases": [ 00:11:07.946 "da2096b6-42cf-11ef-96ac-773515fba644" 00:11:07.946 ], 00:11:07.946 "product_name": "Raid Volume", 00:11:07.946 "block_size": 512, 00:11:07.946 "num_blocks": 190464, 00:11:07.946 "uuid": "da2096b6-42cf-11ef-96ac-773515fba644", 00:11:07.946 "assigned_rate_limits": { 00:11:07.946 "rw_ios_per_sec": 0, 00:11:07.946 "rw_mbytes_per_sec": 0, 00:11:07.946 "r_mbytes_per_sec": 0, 00:11:07.946 "w_mbytes_per_sec": 0 00:11:07.946 }, 00:11:07.946 "claimed": false, 00:11:07.946 "zoned": false, 00:11:07.946 "supported_io_types": { 00:11:07.946 "read": true, 00:11:07.946 "write": true, 00:11:07.946 "unmap": true, 00:11:07.946 "flush": true, 00:11:07.946 "reset": true, 00:11:07.946 "nvme_admin": false, 00:11:07.946 "nvme_io": false, 00:11:07.946 "nvme_io_md": false, 00:11:07.946 "write_zeroes": true, 00:11:07.946 "zcopy": false, 00:11:07.946 "get_zone_info": false, 00:11:07.946 "zone_management": false, 00:11:07.946 "zone_append": false, 00:11:07.946 "compare": false, 00:11:07.946 "compare_and_write": false, 00:11:07.946 "abort": false, 00:11:07.946 "seek_hole": false, 00:11:07.946 "seek_data": false, 00:11:07.946 "copy": false, 00:11:07.946 "nvme_iov_md": false 00:11:07.946 }, 00:11:07.946 "memory_domains": [ 00:11:07.946 { 00:11:07.946 "dma_device_id": "system", 00:11:07.946 "dma_device_type": 1 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.946 "dma_device_type": 2 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "dma_device_id": "system", 00:11:07.946 "dma_device_type": 1 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.946 "dma_device_type": 2 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "dma_device_id": "system", 00:11:07.946 "dma_device_type": 1 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.946 "dma_device_type": 2 00:11:07.946 } 00:11:07.946 ], 00:11:07.946 "driver_specific": { 00:11:07.946 "raid": { 00:11:07.946 "uuid": "da2096b6-42cf-11ef-96ac-773515fba644", 00:11:07.946 "strip_size_kb": 64, 00:11:07.946 
"state": "online", 00:11:07.946 "raid_level": "concat", 00:11:07.946 "superblock": true, 00:11:07.946 "num_base_bdevs": 3, 00:11:07.946 "num_base_bdevs_discovered": 3, 00:11:07.946 "num_base_bdevs_operational": 3, 00:11:07.946 "base_bdevs_list": [ 00:11:07.946 { 00:11:07.946 "name": "pt1", 00:11:07.946 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:07.946 "is_configured": true, 00:11:07.946 "data_offset": 2048, 00:11:07.946 "data_size": 63488 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "name": "pt2", 00:11:07.946 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:07.946 "is_configured": true, 00:11:07.946 "data_offset": 2048, 00:11:07.946 "data_size": 63488 00:11:07.946 }, 00:11:07.946 { 00:11:07.946 "name": "pt3", 00:11:07.946 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:07.946 "is_configured": true, 00:11:07.946 "data_offset": 2048, 00:11:07.946 "data_size": 63488 00:11:07.946 } 00:11:07.946 ] 00:11:07.946 } 00:11:07.946 } 00:11:07.946 }' 00:11:07.946 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.946 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:07.946 pt2 00:11:07.946 pt3' 00:11:07.946 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:07.946 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:07.946 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:08.203 "name": "pt1", 00:11:08.203 "aliases": [ 00:11:08.203 "00000000-0000-0000-0000-000000000001" 00:11:08.203 ], 00:11:08.203 "product_name": "passthru", 00:11:08.203 "block_size": 512, 00:11:08.203 "num_blocks": 65536, 00:11:08.203 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:08.203 "assigned_rate_limits": { 00:11:08.203 "rw_ios_per_sec": 0, 00:11:08.203 "rw_mbytes_per_sec": 0, 00:11:08.203 "r_mbytes_per_sec": 0, 00:11:08.203 "w_mbytes_per_sec": 0 00:11:08.203 }, 00:11:08.203 "claimed": true, 00:11:08.203 "claim_type": "exclusive_write", 00:11:08.203 "zoned": false, 00:11:08.203 "supported_io_types": { 00:11:08.203 "read": true, 00:11:08.203 "write": true, 00:11:08.203 "unmap": true, 00:11:08.203 "flush": true, 00:11:08.203 "reset": true, 00:11:08.203 "nvme_admin": false, 00:11:08.203 "nvme_io": false, 00:11:08.203 "nvme_io_md": false, 00:11:08.203 "write_zeroes": true, 00:11:08.203 "zcopy": true, 00:11:08.203 "get_zone_info": false, 00:11:08.203 "zone_management": false, 00:11:08.203 "zone_append": false, 00:11:08.203 "compare": false, 00:11:08.203 "compare_and_write": false, 00:11:08.203 "abort": true, 00:11:08.203 "seek_hole": false, 00:11:08.203 "seek_data": false, 00:11:08.203 "copy": true, 00:11:08.203 "nvme_iov_md": false 00:11:08.203 }, 00:11:08.203 "memory_domains": [ 00:11:08.203 { 00:11:08.203 "dma_device_id": "system", 00:11:08.203 "dma_device_type": 1 00:11:08.203 }, 00:11:08.203 { 00:11:08.203 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.203 "dma_device_type": 2 00:11:08.203 } 00:11:08.203 ], 00:11:08.203 "driver_specific": { 00:11:08.203 "passthru": { 00:11:08.203 "name": "pt1", 00:11:08.203 "base_bdev_name": "malloc1" 00:11:08.203 } 00:11:08.203 } 00:11:08.203 }' 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:08.203 17:30:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:08.460 "name": "pt2", 00:11:08.460 "aliases": [ 00:11:08.460 "00000000-0000-0000-0000-000000000002" 00:11:08.460 ], 00:11:08.460 "product_name": "passthru", 00:11:08.460 "block_size": 512, 00:11:08.460 "num_blocks": 65536, 00:11:08.460 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:08.460 "assigned_rate_limits": { 00:11:08.460 "rw_ios_per_sec": 0, 00:11:08.460 "rw_mbytes_per_sec": 0, 00:11:08.460 "r_mbytes_per_sec": 0, 00:11:08.460 "w_mbytes_per_sec": 0 00:11:08.460 }, 00:11:08.460 "claimed": true, 00:11:08.460 "claim_type": "exclusive_write", 00:11:08.460 "zoned": false, 00:11:08.460 "supported_io_types": { 00:11:08.460 "read": true, 00:11:08.460 "write": true, 00:11:08.460 "unmap": true, 00:11:08.460 "flush": true, 00:11:08.460 "reset": true, 00:11:08.460 "nvme_admin": false, 00:11:08.460 "nvme_io": false, 00:11:08.460 "nvme_io_md": false, 00:11:08.460 "write_zeroes": true, 00:11:08.460 "zcopy": true, 00:11:08.460 "get_zone_info": false, 00:11:08.460 "zone_management": false, 00:11:08.460 "zone_append": false, 00:11:08.460 "compare": false, 00:11:08.460 "compare_and_write": false, 00:11:08.460 "abort": true, 00:11:08.460 "seek_hole": false, 00:11:08.460 "seek_data": false, 00:11:08.460 "copy": true, 00:11:08.460 "nvme_iov_md": false 00:11:08.460 }, 00:11:08.460 "memory_domains": [ 00:11:08.460 { 00:11:08.460 "dma_device_id": "system", 00:11:08.460 "dma_device_type": 1 00:11:08.460 }, 00:11:08.460 { 00:11:08.460 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.460 "dma_device_type": 2 00:11:08.460 } 00:11:08.460 ], 00:11:08.460 "driver_specific": { 00:11:08.460 "passthru": { 00:11:08.460 "name": "pt2", 00:11:08.460 "base_bdev_name": "malloc2" 00:11:08.460 } 00:11:08.460 } 00:11:08.460 }' 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:08.460 
17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:08.460 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:08.719 "name": "pt3", 00:11:08.719 "aliases": [ 00:11:08.719 "00000000-0000-0000-0000-000000000003" 00:11:08.719 ], 00:11:08.719 "product_name": "passthru", 00:11:08.719 "block_size": 512, 00:11:08.719 "num_blocks": 65536, 00:11:08.719 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:08.719 "assigned_rate_limits": { 00:11:08.719 "rw_ios_per_sec": 0, 00:11:08.719 "rw_mbytes_per_sec": 0, 00:11:08.719 "r_mbytes_per_sec": 0, 00:11:08.719 "w_mbytes_per_sec": 0 00:11:08.719 }, 00:11:08.719 "claimed": true, 00:11:08.719 "claim_type": "exclusive_write", 00:11:08.719 "zoned": false, 00:11:08.719 "supported_io_types": { 00:11:08.719 "read": true, 00:11:08.719 "write": true, 00:11:08.719 "unmap": true, 00:11:08.719 "flush": true, 00:11:08.719 "reset": true, 00:11:08.719 "nvme_admin": false, 00:11:08.719 "nvme_io": false, 00:11:08.719 "nvme_io_md": false, 00:11:08.719 "write_zeroes": true, 00:11:08.719 "zcopy": true, 00:11:08.719 "get_zone_info": false, 00:11:08.719 "zone_management": false, 00:11:08.719 "zone_append": false, 00:11:08.719 "compare": false, 00:11:08.719 "compare_and_write": false, 00:11:08.719 "abort": true, 00:11:08.719 "seek_hole": false, 00:11:08.719 "seek_data": false, 00:11:08.719 "copy": true, 00:11:08.719 "nvme_iov_md": false 00:11:08.719 }, 00:11:08.719 "memory_domains": [ 00:11:08.719 { 00:11:08.719 "dma_device_id": "system", 00:11:08.719 "dma_device_type": 1 00:11:08.719 }, 00:11:08.719 { 00:11:08.719 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.719 "dma_device_type": 2 00:11:08.719 } 00:11:08.719 ], 00:11:08.719 "driver_specific": { 00:11:08.719 "passthru": { 00:11:08.719 "name": "pt3", 00:11:08.719 "base_bdev_name": "malloc3" 00:11:08.719 } 00:11:08.719 } 00:11:08.719 }' 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:08.719 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:08.977 [2024-07-15 17:30:04.756917] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' da2096b6-42cf-11ef-96ac-773515fba644 '!=' da2096b6-42cf-11ef-96ac-773515fba644 ']' 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 55477 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 55477 ']' 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 55477 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 55477 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:08.977 killing process with pid 55477 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55477' 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 55477 00:11:08.977 [2024-07-15 17:30:04.785161] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.977 [2024-07-15 17:30:04.785186] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.977 [2024-07-15 17:30:04.785199] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.977 [2024-07-15 17:30:04.785204] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1003b2434780 name raid_bdev1, state offline 00:11:08.977 17:30:04 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # wait 55477 00:11:08.977 [2024-07-15 17:30:04.802393] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.235 17:30:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:09.235 00:11:09.235 real 0m12.193s 00:11:09.235 user 0m21.523s 00:11:09.235 sys 0m2.070s 00:11:09.235 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:09.235 ************************************ 00:11:09.235 END TEST raid_superblock_test 00:11:09.235 17:30:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.235 ************************************ 00:11:09.235 17:30:05 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:09.235 17:30:05 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:09.235 17:30:05 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:09.235 17:30:05 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.235 17:30:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:09.235 ************************************ 00:11:09.235 START TEST raid_read_error_test 00:11:09.235 ************************************ 00:11:09.235 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:11:09.235 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:09.235 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:09.235 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:09.235 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:09.235 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 
-- # local fail_per_s 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Qc1WdYC2yS 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55828 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55828 /var/tmp/spdk-raid.sock 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 55828 ']' 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:09.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:09.236 17:30:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.236 [2024-07-15 17:30:05.034939] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:11:09.236 [2024-07-15 17:30:05.035104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:09.800 EAL: TSC is not safe to use in SMP mode 00:11:09.800 EAL: TSC is not invariant 00:11:09.800 [2024-07-15 17:30:05.560720] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:10.058 [2024-07-15 17:30:05.647591] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
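Before the bdevperf workload starts, the read-error test builds its stack with the RPCs that appear in the trace below; condensed here as a sketch only, using the socket path and names from this run:

    # Per base bdev: malloc -> error-bdev wrapper -> passthru, then assemble the
    # concat RAID, arm a read failure on one error bdev, and drive I/O via bdevperf.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    "$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_error_create BaseBdev1_malloc
    "$rpc" -s "$sock" bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # ...repeat for BaseBdev2 and BaseBdev3, then:
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc read failure
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests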
00:11:10.058 [2024-07-15 17:30:05.649705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.058 [2024-07-15 17:30:05.650469] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.058 [2024-07-15 17:30:05.650483] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.316 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:10.317 17:30:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:10.317 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:10.317 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:10.575 BaseBdev1_malloc 00:11:10.575 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:10.833 true 00:11:10.833 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:11.092 [2024-07-15 17:30:06.794451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:11.092 [2024-07-15 17:30:06.794517] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.092 [2024-07-15 17:30:06.794547] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f038a34780 00:11:11.092 [2024-07-15 17:30:06.794564] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.092 [2024-07-15 17:30:06.795248] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.092 [2024-07-15 17:30:06.795278] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:11.092 BaseBdev1 00:11:11.092 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:11.092 17:30:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:11.351 BaseBdev2_malloc 00:11:11.351 17:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:11.608 true 00:11:11.608 17:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:11.866 [2024-07-15 17:30:07.498462] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:11.866 [2024-07-15 17:30:07.498521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.866 [2024-07-15 17:30:07.498552] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f038a34c80 00:11:11.866 [2024-07-15 17:30:07.498561] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.866 [2024-07-15 17:30:07.499248] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.866 [2024-07-15 17:30:07.499282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:11:11.866 BaseBdev2 00:11:11.866 17:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:11.866 17:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:12.124 BaseBdev3_malloc 00:11:12.124 17:30:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:12.382 true 00:11:12.382 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:12.640 [2024-07-15 17:30:08.290469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:12.640 [2024-07-15 17:30:08.290532] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:12.640 [2024-07-15 17:30:08.290561] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18f038a35180 00:11:12.640 [2024-07-15 17:30:08.290570] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:12.640 [2024-07-15 17:30:08.291288] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:12.640 [2024-07-15 17:30:08.291315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:12.640 BaseBdev3 00:11:12.640 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:12.898 [2024-07-15 17:30:08.566489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:12.898 [2024-07-15 17:30:08.567105] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:12.898 [2024-07-15 17:30:08.567132] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:12.898 [2024-07-15 17:30:08.567198] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x18f038a35400 00:11:12.898 [2024-07-15 17:30:08.567204] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:12.898 [2024-07-15 17:30:08.567244] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f038aa0e20 00:11:12.898 [2024-07-15 17:30:08.567320] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18f038a35400 00:11:12.898 [2024-07-15 17:30:08.567324] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x18f038a35400 00:11:12.898 [2024-07-15 17:30:08.567354] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:12.898 
17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.898 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:13.156 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:13.156 "name": "raid_bdev1", 00:11:13.156 "uuid": "e1cbc017-42cf-11ef-96ac-773515fba644", 00:11:13.156 "strip_size_kb": 64, 00:11:13.156 "state": "online", 00:11:13.156 "raid_level": "concat", 00:11:13.156 "superblock": true, 00:11:13.156 "num_base_bdevs": 3, 00:11:13.156 "num_base_bdevs_discovered": 3, 00:11:13.156 "num_base_bdevs_operational": 3, 00:11:13.156 "base_bdevs_list": [ 00:11:13.156 { 00:11:13.156 "name": "BaseBdev1", 00:11:13.156 "uuid": "12ca32a6-4f56-6452-bd9b-d23605c59bb3", 00:11:13.156 "is_configured": true, 00:11:13.156 "data_offset": 2048, 00:11:13.156 "data_size": 63488 00:11:13.156 }, 00:11:13.156 { 00:11:13.156 "name": "BaseBdev2", 00:11:13.156 "uuid": "5f9aa58b-812e-6755-aca0-ab82b8486985", 00:11:13.156 "is_configured": true, 00:11:13.156 "data_offset": 2048, 00:11:13.156 "data_size": 63488 00:11:13.156 }, 00:11:13.156 { 00:11:13.156 "name": "BaseBdev3", 00:11:13.156 "uuid": "3ba5a4ef-294e-4f5f-8df1-5ea2b3977623", 00:11:13.156 "is_configured": true, 00:11:13.156 "data_offset": 2048, 00:11:13.156 "data_size": 63488 00:11:13.156 } 00:11:13.156 ] 00:11:13.156 }' 00:11:13.156 17:30:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:13.156 17:30:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.417 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:13.417 17:30:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:13.675 [2024-07-15 17:30:09.290670] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18f038aa0ec0 00:11:14.609 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:14.867 
17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:14.867 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.124 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:15.124 "name": "raid_bdev1", 00:11:15.124 "uuid": "e1cbc017-42cf-11ef-96ac-773515fba644", 00:11:15.124 "strip_size_kb": 64, 00:11:15.124 "state": "online", 00:11:15.124 "raid_level": "concat", 00:11:15.124 "superblock": true, 00:11:15.124 "num_base_bdevs": 3, 00:11:15.124 "num_base_bdevs_discovered": 3, 00:11:15.124 "num_base_bdevs_operational": 3, 00:11:15.124 "base_bdevs_list": [ 00:11:15.124 { 00:11:15.124 "name": "BaseBdev1", 00:11:15.124 "uuid": "12ca32a6-4f56-6452-bd9b-d23605c59bb3", 00:11:15.124 "is_configured": true, 00:11:15.124 "data_offset": 2048, 00:11:15.124 "data_size": 63488 00:11:15.124 }, 00:11:15.124 { 00:11:15.124 "name": "BaseBdev2", 00:11:15.124 "uuid": "5f9aa58b-812e-6755-aca0-ab82b8486985", 00:11:15.124 "is_configured": true, 00:11:15.124 "data_offset": 2048, 00:11:15.124 "data_size": 63488 00:11:15.124 }, 00:11:15.124 { 00:11:15.124 "name": "BaseBdev3", 00:11:15.124 "uuid": "3ba5a4ef-294e-4f5f-8df1-5ea2b3977623", 00:11:15.124 "is_configured": true, 00:11:15.124 "data_offset": 2048, 00:11:15.124 "data_size": 63488 00:11:15.124 } 00:11:15.124 ] 00:11:15.124 }' 00:11:15.124 17:30:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:15.124 17:30:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.383 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:15.642 [2024-07-15 17:30:11.309829] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.642 [2024-07-15 17:30:11.309857] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:15.642 [2024-07-15 17:30:11.310286] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:15.642 [2024-07-15 17:30:11.310303] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.642 [2024-07-15 17:30:11.310311] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:15.642 [2024-07-15 17:30:11.310316] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18f038a35400 name raid_bdev1, state offline 00:11:15.642 0 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55828 00:11:15.642 17:30:11 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 55828 ']' 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 55828 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55828 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:15.642 killing process with pid 55828 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55828' 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 55828 00:11:15.642 [2024-07-15 17:30:11.339581] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:15.642 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 55828 00:11:15.642 [2024-07-15 17:30:11.356759] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Qc1WdYC2yS 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:11:15.900 00:11:15.900 real 0m6.524s 00:11:15.900 user 0m10.253s 00:11:15.900 sys 0m1.126s 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:15.900 17:30:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.900 ************************************ 00:11:15.900 END TEST raid_read_error_test 00:11:15.900 ************************************ 00:11:15.900 17:30:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:15.900 17:30:11 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:15.900 17:30:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:15.900 17:30:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:15.900 17:30:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:15.900 ************************************ 00:11:15.900 START TEST raid_write_error_test 00:11:15.900 ************************************ 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:15.900 17:30:11 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.onCYBUmZZP 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55959 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55959 /var/tmp/spdk-raid.sock 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 55959 ']' 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:15.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:15.900 17:30:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.900 [2024-07-15 17:30:11.601209] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:11:15.900 [2024-07-15 17:30:11.601415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:16.466 EAL: TSC is not safe to use in SMP mode 00:11:16.466 EAL: TSC is not invariant 00:11:16.466 [2024-07-15 17:30:12.141357] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:16.466 [2024-07-15 17:30:12.232723] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:16.466 [2024-07-15 17:30:12.234841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:16.466 [2024-07-15 17:30:12.235601] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:16.466 [2024-07-15 17:30:12.235616] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:17.031 17:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.031 17:30:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:17.031 17:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:17.031 17:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:17.288 BaseBdev1_malloc 00:11:17.288 17:30:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:17.556 true 00:11:17.556 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:17.821 [2024-07-15 17:30:13.447937] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:17.821 [2024-07-15 17:30:13.448013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.821 [2024-07-15 17:30:13.448057] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f701e834780 00:11:17.821 [2024-07-15 17:30:13.448066] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.821 [2024-07-15 17:30:13.448744] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.821 [2024-07-15 17:30:13.448776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:17.821 BaseBdev1 00:11:17.821 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:17.821 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:18.079 BaseBdev2_malloc 00:11:18.079 17:30:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:18.394 true 00:11:18.394 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:18.667 [2024-07-15 17:30:14.235968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:18.667 [2024-07-15 17:30:14.236023] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:18.667 [2024-07-15 17:30:14.236064] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f701e834c80 00:11:18.667 [2024-07-15 17:30:14.236073] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:18.667 [2024-07-15 17:30:14.236775] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:18.667 [2024-07-15 17:30:14.236802] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:18.667 BaseBdev2 00:11:18.667 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:18.667 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:18.667 BaseBdev3_malloc 00:11:18.667 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:18.925 true 00:11:18.925 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:19.183 [2024-07-15 17:30:14.964028] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:19.183 [2024-07-15 17:30:14.964088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.183 [2024-07-15 17:30:14.964143] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f701e835180 00:11:19.183 [2024-07-15 17:30:14.964154] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.183 [2024-07-15 17:30:14.964913] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.183 [2024-07-15 17:30:14.964938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:19.183 BaseBdev3 00:11:19.183 17:30:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:19.441 [2024-07-15 17:30:15.252050] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.441 [2024-07-15 17:30:15.252666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:19.441 [2024-07-15 17:30:15.252692] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.441 [2024-07-15 17:30:15.252750] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3f701e835400 00:11:19.441 [2024-07-15 17:30:15.252756] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:19.441 [2024-07-15 17:30:15.252795] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3f701e8a0e20 00:11:19.441 [2024-07-15 17:30:15.252870] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3f701e835400 00:11:19.441 [2024-07-15 17:30:15.252874] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3f701e835400 00:11:19.441 [2024-07-15 17:30:15.252901] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:19.441 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:19.441 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:19.441 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:19.441 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:19.699 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:19.699 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:19.700 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:19.700 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:19.700 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:19.700 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:19.700 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:19.700 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:19.957 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:19.957 "name": "raid_bdev1", 00:11:19.957 "uuid": "e5c7e2cf-42cf-11ef-96ac-773515fba644", 00:11:19.957 "strip_size_kb": 64, 00:11:19.957 "state": "online", 00:11:19.957 "raid_level": "concat", 00:11:19.957 "superblock": true, 00:11:19.957 "num_base_bdevs": 3, 00:11:19.957 "num_base_bdevs_discovered": 3, 00:11:19.957 "num_base_bdevs_operational": 3, 00:11:19.957 "base_bdevs_list": [ 00:11:19.957 { 00:11:19.957 "name": "BaseBdev1", 00:11:19.957 "uuid": "525e6fc1-b210-5056-be74-1fdc868b3ace", 00:11:19.957 "is_configured": true, 00:11:19.957 "data_offset": 2048, 00:11:19.957 "data_size": 63488 00:11:19.957 }, 00:11:19.957 { 00:11:19.957 "name": "BaseBdev2", 00:11:19.957 "uuid": "a2f4dab6-2f57-6756-9172-6b9220d6ff68", 00:11:19.957 "is_configured": true, 00:11:19.957 "data_offset": 2048, 00:11:19.957 "data_size": 63488 00:11:19.957 }, 00:11:19.957 { 00:11:19.957 "name": "BaseBdev3", 00:11:19.957 "uuid": "0e98cdf3-0d13-0851-8f8a-bc45aea03359", 00:11:19.957 "is_configured": true, 00:11:19.957 "data_offset": 2048, 00:11:19.957 "data_size": 63488 00:11:19.957 } 00:11:19.957 ] 00:11:19.957 }' 00:11:19.957 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:19.957 17:30:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.215 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:20.215 17:30:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:11:20.215 [2024-07-15 17:30:16.020261] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3f701e8a0ec0 00:11:21.148 17:30:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:21.712 "name": "raid_bdev1", 00:11:21.712 "uuid": "e5c7e2cf-42cf-11ef-96ac-773515fba644", 00:11:21.712 "strip_size_kb": 64, 00:11:21.712 "state": "online", 00:11:21.712 "raid_level": "concat", 00:11:21.712 "superblock": true, 00:11:21.712 "num_base_bdevs": 3, 00:11:21.712 "num_base_bdevs_discovered": 3, 00:11:21.712 "num_base_bdevs_operational": 3, 00:11:21.712 "base_bdevs_list": [ 00:11:21.712 { 00:11:21.712 "name": "BaseBdev1", 00:11:21.712 "uuid": "525e6fc1-b210-5056-be74-1fdc868b3ace", 00:11:21.712 "is_configured": true, 00:11:21.712 "data_offset": 2048, 00:11:21.712 "data_size": 63488 00:11:21.712 }, 00:11:21.712 { 00:11:21.712 "name": "BaseBdev2", 00:11:21.712 "uuid": "a2f4dab6-2f57-6756-9172-6b9220d6ff68", 00:11:21.712 "is_configured": true, 00:11:21.712 "data_offset": 2048, 00:11:21.712 "data_size": 63488 00:11:21.712 }, 00:11:21.712 { 00:11:21.712 "name": "BaseBdev3", 00:11:21.712 "uuid": "0e98cdf3-0d13-0851-8f8a-bc45aea03359", 00:11:21.712 "is_configured": true, 00:11:21.712 "data_offset": 2048, 00:11:21.712 "data_size": 63488 00:11:21.712 } 00:11:21.712 ] 00:11:21.712 }' 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:21.712 17:30:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.277 
17:30:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:22.277 [2024-07-15 17:30:18.062164] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.277 [2024-07-15 17:30:18.062191] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.277 [2024-07-15 17:30:18.062536] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.277 [2024-07-15 17:30:18.062546] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.277 [2024-07-15 17:30:18.062554] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.277 [2024-07-15 17:30:18.062558] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3f701e835400 name raid_bdev1, state offline 00:11:22.277 0 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55959 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 55959 ']' 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 55959 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55959 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:22.277 killing process with pid 55959 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55959' 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 55959 00:11:22.277 [2024-07-15 17:30:18.088701] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.277 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 55959 00:11:22.277 [2024-07-15 17:30:18.106083] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.onCYBUmZZP 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:11:22.535 00:11:22.535 real 0m6.705s 00:11:22.535 user 0m10.468s 00:11:22.535 sys 0m1.209s 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:11:22.535 ************************************ 00:11:22.535 END TEST raid_write_error_test 00:11:22.535 ************************************ 00:11:22.535 17:30:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.535 17:30:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:22.535 17:30:18 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:22.535 17:30:18 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:22.535 17:30:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:22.535 17:30:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:22.535 17:30:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:22.535 ************************************ 00:11:22.535 START TEST raid_state_function_test 00:11:22.535 ************************************ 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:22.535 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=56092 00:11:22.536 Process raid pid: 56092 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56092' 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 56092 /var/tmp/spdk-raid.sock 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 56092 ']' 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:22.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:22.536 17:30:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.536 [2024-07-15 17:30:18.351296] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:11:22.536 [2024-07-15 17:30:18.351482] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:23.102 EAL: TSC is not safe to use in SMP mode 00:11:23.102 EAL: TSC is not invariant 00:11:23.102 [2024-07-15 17:30:18.892343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:23.360 [2024-07-15 17:30:18.982129] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:23.360 [2024-07-15 17:30:18.984210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.360 [2024-07-15 17:30:18.984996] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.360 [2024-07-15 17:30:18.985010] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:23.618 17:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:23.618 17:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:23.618 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:23.877 [2024-07-15 17:30:19.581936] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:23.877 [2024-07-15 17:30:19.581997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:23.877 [2024-07-15 17:30:19.582003] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:23.877 [2024-07-15 17:30:19.582028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:23.877 [2024-07-15 17:30:19.582031] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:23.877 [2024-07-15 17:30:19.582038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.877 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:24.134 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:24.134 "name": "Existed_Raid", 00:11:24.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.134 "strip_size_kb": 0, 00:11:24.134 "state": "configuring", 00:11:24.134 "raid_level": "raid1", 00:11:24.134 "superblock": false, 00:11:24.134 "num_base_bdevs": 3, 00:11:24.134 "num_base_bdevs_discovered": 0, 00:11:24.134 "num_base_bdevs_operational": 3, 00:11:24.134 "base_bdevs_list": [ 00:11:24.134 
{ 00:11:24.134 "name": "BaseBdev1", 00:11:24.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.134 "is_configured": false, 00:11:24.134 "data_offset": 0, 00:11:24.134 "data_size": 0 00:11:24.134 }, 00:11:24.134 { 00:11:24.134 "name": "BaseBdev2", 00:11:24.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.134 "is_configured": false, 00:11:24.134 "data_offset": 0, 00:11:24.134 "data_size": 0 00:11:24.134 }, 00:11:24.134 { 00:11:24.134 "name": "BaseBdev3", 00:11:24.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:24.134 "is_configured": false, 00:11:24.134 "data_offset": 0, 00:11:24.134 "data_size": 0 00:11:24.134 } 00:11:24.134 ] 00:11:24.134 }' 00:11:24.134 17:30:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:24.134 17:30:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.392 17:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:24.649 [2024-07-15 17:30:20.465949] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:24.649 [2024-07-15 17:30:20.465976] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8585034500 name Existed_Raid, state configuring 00:11:24.906 17:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:25.165 [2024-07-15 17:30:20.741965] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.165 [2024-07-15 17:30:20.742009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.165 [2024-07-15 17:30:20.742014] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.165 [2024-07-15 17:30:20.742038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.165 [2024-07-15 17:30:20.742041] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:25.165 [2024-07-15 17:30:20.742047] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.165 17:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:25.165 [2024-07-15 17:30:20.983013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:25.165 BaseBdev1 00:11:25.423 17:30:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:25.423 17:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:25.423 17:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:25.423 17:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:25.423 17:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:25.423 17:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:25.423 17:30:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:11:25.682 17:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:25.978 [ 00:11:25.978 { 00:11:25.978 "name": "BaseBdev1", 00:11:25.978 "aliases": [ 00:11:25.978 "e932353e-42cf-11ef-96ac-773515fba644" 00:11:25.978 ], 00:11:25.978 "product_name": "Malloc disk", 00:11:25.978 "block_size": 512, 00:11:25.978 "num_blocks": 65536, 00:11:25.978 "uuid": "e932353e-42cf-11ef-96ac-773515fba644", 00:11:25.978 "assigned_rate_limits": { 00:11:25.978 "rw_ios_per_sec": 0, 00:11:25.978 "rw_mbytes_per_sec": 0, 00:11:25.978 "r_mbytes_per_sec": 0, 00:11:25.978 "w_mbytes_per_sec": 0 00:11:25.978 }, 00:11:25.978 "claimed": true, 00:11:25.978 "claim_type": "exclusive_write", 00:11:25.978 "zoned": false, 00:11:25.978 "supported_io_types": { 00:11:25.978 "read": true, 00:11:25.978 "write": true, 00:11:25.978 "unmap": true, 00:11:25.978 "flush": true, 00:11:25.978 "reset": true, 00:11:25.978 "nvme_admin": false, 00:11:25.978 "nvme_io": false, 00:11:25.978 "nvme_io_md": false, 00:11:25.978 "write_zeroes": true, 00:11:25.978 "zcopy": true, 00:11:25.978 "get_zone_info": false, 00:11:25.978 "zone_management": false, 00:11:25.978 "zone_append": false, 00:11:25.978 "compare": false, 00:11:25.978 "compare_and_write": false, 00:11:25.978 "abort": true, 00:11:25.978 "seek_hole": false, 00:11:25.978 "seek_data": false, 00:11:25.978 "copy": true, 00:11:25.978 "nvme_iov_md": false 00:11:25.978 }, 00:11:25.978 "memory_domains": [ 00:11:25.978 { 00:11:25.978 "dma_device_id": "system", 00:11:25.978 "dma_device_type": 1 00:11:25.978 }, 00:11:25.978 { 00:11:25.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:25.978 "dma_device_type": 2 00:11:25.978 } 00:11:25.978 ], 00:11:25.978 "driver_specific": {} 00:11:25.978 } 00:11:25.978 ] 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:25.978 "name": "Existed_Raid", 00:11:25.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.978 "strip_size_kb": 0, 00:11:25.978 "state": "configuring", 00:11:25.978 "raid_level": "raid1", 00:11:25.978 "superblock": false, 00:11:25.978 "num_base_bdevs": 3, 00:11:25.978 "num_base_bdevs_discovered": 1, 00:11:25.978 "num_base_bdevs_operational": 3, 00:11:25.978 "base_bdevs_list": [ 00:11:25.978 { 00:11:25.978 "name": "BaseBdev1", 00:11:25.978 "uuid": "e932353e-42cf-11ef-96ac-773515fba644", 00:11:25.978 "is_configured": true, 00:11:25.978 "data_offset": 0, 00:11:25.978 "data_size": 65536 00:11:25.978 }, 00:11:25.978 { 00:11:25.978 "name": "BaseBdev2", 00:11:25.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.978 "is_configured": false, 00:11:25.978 "data_offset": 0, 00:11:25.978 "data_size": 0 00:11:25.978 }, 00:11:25.978 { 00:11:25.978 "name": "BaseBdev3", 00:11:25.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.978 "is_configured": false, 00:11:25.978 "data_offset": 0, 00:11:25.978 "data_size": 0 00:11:25.978 } 00:11:25.978 ] 00:11:25.978 }' 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.978 17:30:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.544 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:26.544 [2024-07-15 17:30:22.318076] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.544 [2024-07-15 17:30:22.318109] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8585034500 name Existed_Raid, state configuring 00:11:26.544 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:26.803 [2024-07-15 17:30:22.550098] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.803 [2024-07-15 17:30:22.550936] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.803 [2024-07-15 17:30:22.550972] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.803 [2024-07-15 17:30:22.550978] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:26.803 [2024-07-15 17:30:22.550986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:26.803 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.062 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:27.062 "name": "Existed_Raid", 00:11:27.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.062 "strip_size_kb": 0, 00:11:27.062 "state": "configuring", 00:11:27.062 "raid_level": "raid1", 00:11:27.062 "superblock": false, 00:11:27.062 "num_base_bdevs": 3, 00:11:27.062 "num_base_bdevs_discovered": 1, 00:11:27.062 "num_base_bdevs_operational": 3, 00:11:27.062 "base_bdevs_list": [ 00:11:27.062 { 00:11:27.062 "name": "BaseBdev1", 00:11:27.062 "uuid": "e932353e-42cf-11ef-96ac-773515fba644", 00:11:27.062 "is_configured": true, 00:11:27.062 "data_offset": 0, 00:11:27.062 "data_size": 65536 00:11:27.062 }, 00:11:27.062 { 00:11:27.062 "name": "BaseBdev2", 00:11:27.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.062 "is_configured": false, 00:11:27.062 "data_offset": 0, 00:11:27.062 "data_size": 0 00:11:27.062 }, 00:11:27.062 { 00:11:27.062 "name": "BaseBdev3", 00:11:27.062 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.062 "is_configured": false, 00:11:27.062 "data_offset": 0, 00:11:27.062 "data_size": 0 00:11:27.062 } 00:11:27.062 ] 00:11:27.062 }' 00:11:27.062 17:30:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:27.062 17:30:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.638 17:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.896 [2024-07-15 17:30:23.486268] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.896 BaseBdev2 00:11:27.896 17:30:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:27.896 17:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:27.896 17:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:27.896 17:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:27.896 17:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:27.896 17:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:27.896 17:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:28.154 17:30:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:28.413 [ 00:11:28.413 { 00:11:28.413 "name": "BaseBdev2", 00:11:28.413 "aliases": [ 00:11:28.413 "eab04efa-42cf-11ef-96ac-773515fba644" 00:11:28.413 ], 00:11:28.413 "product_name": "Malloc disk", 00:11:28.413 "block_size": 512, 00:11:28.413 "num_blocks": 65536, 00:11:28.413 "uuid": "eab04efa-42cf-11ef-96ac-773515fba644", 00:11:28.413 "assigned_rate_limits": { 00:11:28.413 "rw_ios_per_sec": 0, 00:11:28.413 "rw_mbytes_per_sec": 0, 00:11:28.413 "r_mbytes_per_sec": 0, 00:11:28.413 "w_mbytes_per_sec": 0 00:11:28.413 }, 00:11:28.413 "claimed": true, 00:11:28.413 "claim_type": "exclusive_write", 00:11:28.413 "zoned": false, 00:11:28.413 "supported_io_types": { 00:11:28.413 "read": true, 00:11:28.413 "write": true, 00:11:28.413 "unmap": true, 00:11:28.413 "flush": true, 00:11:28.414 "reset": true, 00:11:28.414 "nvme_admin": false, 00:11:28.414 "nvme_io": false, 00:11:28.414 "nvme_io_md": false, 00:11:28.414 "write_zeroes": true, 00:11:28.414 "zcopy": true, 00:11:28.414 "get_zone_info": false, 00:11:28.414 "zone_management": false, 00:11:28.414 "zone_append": false, 00:11:28.414 "compare": false, 00:11:28.414 "compare_and_write": false, 00:11:28.414 "abort": true, 00:11:28.414 "seek_hole": false, 00:11:28.414 "seek_data": false, 00:11:28.414 "copy": true, 00:11:28.414 "nvme_iov_md": false 00:11:28.414 }, 00:11:28.414 "memory_domains": [ 00:11:28.414 { 00:11:28.414 "dma_device_id": "system", 00:11:28.414 "dma_device_type": 1 00:11:28.414 }, 00:11:28.414 { 00:11:28.414 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.414 "dma_device_type": 2 00:11:28.414 } 00:11:28.414 ], 00:11:28.414 "driver_specific": {} 00:11:28.414 } 00:11:28.414 ] 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.414 17:30:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:28.414 "name": "Existed_Raid", 00:11:28.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.414 "strip_size_kb": 0, 00:11:28.414 "state": "configuring", 00:11:28.414 "raid_level": "raid1", 00:11:28.414 "superblock": false, 00:11:28.414 "num_base_bdevs": 3, 00:11:28.414 "num_base_bdevs_discovered": 2, 00:11:28.414 "num_base_bdevs_operational": 3, 00:11:28.414 "base_bdevs_list": [ 00:11:28.414 { 00:11:28.414 "name": "BaseBdev1", 00:11:28.414 "uuid": "e932353e-42cf-11ef-96ac-773515fba644", 00:11:28.414 "is_configured": true, 00:11:28.414 "data_offset": 0, 00:11:28.414 "data_size": 65536 00:11:28.414 }, 00:11:28.414 { 00:11:28.414 "name": "BaseBdev2", 00:11:28.414 "uuid": "eab04efa-42cf-11ef-96ac-773515fba644", 00:11:28.414 "is_configured": true, 00:11:28.414 "data_offset": 0, 00:11:28.414 "data_size": 65536 00:11:28.414 }, 00:11:28.414 { 00:11:28.414 "name": "BaseBdev3", 00:11:28.414 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.414 "is_configured": false, 00:11:28.414 "data_offset": 0, 00:11:28.414 "data_size": 0 00:11:28.414 } 00:11:28.414 ] 00:11:28.414 }' 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:28.414 17:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.980 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:29.238 [2024-07-15 17:30:24.822316] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.238 [2024-07-15 17:30:24.822346] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8585034a00 00:11:29.238 [2024-07-15 17:30:24.822350] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:29.238 [2024-07-15 17:30:24.822372] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8585097e20 00:11:29.238 [2024-07-15 17:30:24.822471] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8585034a00 00:11:29.238 [2024-07-15 17:30:24.822476] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x8585034a00 00:11:29.238 [2024-07-15 17:30:24.822513] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.238 BaseBdev3 00:11:29.238 17:30:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:29.238 17:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:29.238 17:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:29.238 17:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:29.238 17:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:29.238 17:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:29.238 17:30:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 
2000 00:11:29.497 [ 00:11:29.497 { 00:11:29.497 "name": "BaseBdev3", 00:11:29.497 "aliases": [ 00:11:29.497 "eb7c2cfe-42cf-11ef-96ac-773515fba644" 00:11:29.497 ], 00:11:29.497 "product_name": "Malloc disk", 00:11:29.497 "block_size": 512, 00:11:29.497 "num_blocks": 65536, 00:11:29.497 "uuid": "eb7c2cfe-42cf-11ef-96ac-773515fba644", 00:11:29.497 "assigned_rate_limits": { 00:11:29.497 "rw_ios_per_sec": 0, 00:11:29.497 "rw_mbytes_per_sec": 0, 00:11:29.497 "r_mbytes_per_sec": 0, 00:11:29.497 "w_mbytes_per_sec": 0 00:11:29.497 }, 00:11:29.497 "claimed": true, 00:11:29.497 "claim_type": "exclusive_write", 00:11:29.497 "zoned": false, 00:11:29.497 "supported_io_types": { 00:11:29.497 "read": true, 00:11:29.497 "write": true, 00:11:29.497 "unmap": true, 00:11:29.497 "flush": true, 00:11:29.497 "reset": true, 00:11:29.497 "nvme_admin": false, 00:11:29.497 "nvme_io": false, 00:11:29.497 "nvme_io_md": false, 00:11:29.497 "write_zeroes": true, 00:11:29.497 "zcopy": true, 00:11:29.497 "get_zone_info": false, 00:11:29.497 "zone_management": false, 00:11:29.497 "zone_append": false, 00:11:29.497 "compare": false, 00:11:29.497 "compare_and_write": false, 00:11:29.497 "abort": true, 00:11:29.497 "seek_hole": false, 00:11:29.497 "seek_data": false, 00:11:29.497 "copy": true, 00:11:29.497 "nvme_iov_md": false 00:11:29.497 }, 00:11:29.497 "memory_domains": [ 00:11:29.497 { 00:11:29.497 "dma_device_id": "system", 00:11:29.497 "dma_device_type": 1 00:11:29.497 }, 00:11:29.497 { 00:11:29.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.497 "dma_device_type": 2 00:11:29.497 } 00:11:29.497 ], 00:11:29.497 "driver_specific": {} 00:11:29.497 } 00:11:29.497 ] 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.497 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.779 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:11:29.779 "name": "Existed_Raid", 00:11:29.779 "uuid": "eb7c3344-42cf-11ef-96ac-773515fba644", 00:11:29.779 "strip_size_kb": 0, 00:11:29.779 "state": "online", 00:11:29.779 "raid_level": "raid1", 00:11:29.779 "superblock": false, 00:11:29.779 "num_base_bdevs": 3, 00:11:29.779 "num_base_bdevs_discovered": 3, 00:11:29.779 "num_base_bdevs_operational": 3, 00:11:29.779 "base_bdevs_list": [ 00:11:29.779 { 00:11:29.779 "name": "BaseBdev1", 00:11:29.779 "uuid": "e932353e-42cf-11ef-96ac-773515fba644", 00:11:29.779 "is_configured": true, 00:11:29.779 "data_offset": 0, 00:11:29.779 "data_size": 65536 00:11:29.779 }, 00:11:29.779 { 00:11:29.779 "name": "BaseBdev2", 00:11:29.779 "uuid": "eab04efa-42cf-11ef-96ac-773515fba644", 00:11:29.779 "is_configured": true, 00:11:29.779 "data_offset": 0, 00:11:29.779 "data_size": 65536 00:11:29.779 }, 00:11:29.779 { 00:11:29.779 "name": "BaseBdev3", 00:11:29.779 "uuid": "eb7c2cfe-42cf-11ef-96ac-773515fba644", 00:11:29.779 "is_configured": true, 00:11:29.779 "data_offset": 0, 00:11:29.779 "data_size": 65536 00:11:29.779 } 00:11:29.779 ] 00:11:29.780 }' 00:11:29.780 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.780 17:30:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.345 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:30.345 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:30.345 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:30.345 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:30.345 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:30.345 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:30.345 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:30.345 17:30:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:30.603 [2024-07-15 17:30:26.182314] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:30.603 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:30.603 "name": "Existed_Raid", 00:11:30.603 "aliases": [ 00:11:30.603 "eb7c3344-42cf-11ef-96ac-773515fba644" 00:11:30.603 ], 00:11:30.603 "product_name": "Raid Volume", 00:11:30.603 "block_size": 512, 00:11:30.603 "num_blocks": 65536, 00:11:30.603 "uuid": "eb7c3344-42cf-11ef-96ac-773515fba644", 00:11:30.603 "assigned_rate_limits": { 00:11:30.603 "rw_ios_per_sec": 0, 00:11:30.603 "rw_mbytes_per_sec": 0, 00:11:30.603 "r_mbytes_per_sec": 0, 00:11:30.603 "w_mbytes_per_sec": 0 00:11:30.603 }, 00:11:30.603 "claimed": false, 00:11:30.603 "zoned": false, 00:11:30.603 "supported_io_types": { 00:11:30.603 "read": true, 00:11:30.603 "write": true, 00:11:30.603 "unmap": false, 00:11:30.603 "flush": false, 00:11:30.603 "reset": true, 00:11:30.603 "nvme_admin": false, 00:11:30.603 "nvme_io": false, 00:11:30.603 "nvme_io_md": false, 00:11:30.603 "write_zeroes": true, 00:11:30.603 "zcopy": false, 00:11:30.603 "get_zone_info": false, 00:11:30.603 "zone_management": false, 00:11:30.603 "zone_append": false, 00:11:30.603 "compare": false, 00:11:30.603 
"compare_and_write": false, 00:11:30.603 "abort": false, 00:11:30.603 "seek_hole": false, 00:11:30.603 "seek_data": false, 00:11:30.603 "copy": false, 00:11:30.603 "nvme_iov_md": false 00:11:30.603 }, 00:11:30.603 "memory_domains": [ 00:11:30.603 { 00:11:30.603 "dma_device_id": "system", 00:11:30.603 "dma_device_type": 1 00:11:30.603 }, 00:11:30.603 { 00:11:30.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.603 "dma_device_type": 2 00:11:30.603 }, 00:11:30.603 { 00:11:30.603 "dma_device_id": "system", 00:11:30.603 "dma_device_type": 1 00:11:30.603 }, 00:11:30.603 { 00:11:30.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.603 "dma_device_type": 2 00:11:30.603 }, 00:11:30.603 { 00:11:30.603 "dma_device_id": "system", 00:11:30.603 "dma_device_type": 1 00:11:30.603 }, 00:11:30.603 { 00:11:30.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.603 "dma_device_type": 2 00:11:30.603 } 00:11:30.603 ], 00:11:30.603 "driver_specific": { 00:11:30.603 "raid": { 00:11:30.603 "uuid": "eb7c3344-42cf-11ef-96ac-773515fba644", 00:11:30.603 "strip_size_kb": 0, 00:11:30.603 "state": "online", 00:11:30.603 "raid_level": "raid1", 00:11:30.603 "superblock": false, 00:11:30.603 "num_base_bdevs": 3, 00:11:30.603 "num_base_bdevs_discovered": 3, 00:11:30.603 "num_base_bdevs_operational": 3, 00:11:30.603 "base_bdevs_list": [ 00:11:30.603 { 00:11:30.603 "name": "BaseBdev1", 00:11:30.603 "uuid": "e932353e-42cf-11ef-96ac-773515fba644", 00:11:30.603 "is_configured": true, 00:11:30.603 "data_offset": 0, 00:11:30.603 "data_size": 65536 00:11:30.603 }, 00:11:30.603 { 00:11:30.603 "name": "BaseBdev2", 00:11:30.603 "uuid": "eab04efa-42cf-11ef-96ac-773515fba644", 00:11:30.604 "is_configured": true, 00:11:30.604 "data_offset": 0, 00:11:30.604 "data_size": 65536 00:11:30.604 }, 00:11:30.604 { 00:11:30.604 "name": "BaseBdev3", 00:11:30.604 "uuid": "eb7c2cfe-42cf-11ef-96ac-773515fba644", 00:11:30.604 "is_configured": true, 00:11:30.604 "data_offset": 0, 00:11:30.604 "data_size": 65536 00:11:30.604 } 00:11:30.604 ] 00:11:30.604 } 00:11:30.604 } 00:11:30.604 }' 00:11:30.604 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:30.604 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:30.604 BaseBdev2 00:11:30.604 BaseBdev3' 00:11:30.604 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:30.604 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:30.604 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:30.862 "name": "BaseBdev1", 00:11:30.862 "aliases": [ 00:11:30.862 "e932353e-42cf-11ef-96ac-773515fba644" 00:11:30.862 ], 00:11:30.862 "product_name": "Malloc disk", 00:11:30.862 "block_size": 512, 00:11:30.862 "num_blocks": 65536, 00:11:30.862 "uuid": "e932353e-42cf-11ef-96ac-773515fba644", 00:11:30.862 "assigned_rate_limits": { 00:11:30.862 "rw_ios_per_sec": 0, 00:11:30.862 "rw_mbytes_per_sec": 0, 00:11:30.862 "r_mbytes_per_sec": 0, 00:11:30.862 "w_mbytes_per_sec": 0 00:11:30.862 }, 00:11:30.862 "claimed": true, 00:11:30.862 "claim_type": "exclusive_write", 00:11:30.862 "zoned": false, 00:11:30.862 "supported_io_types": { 
00:11:30.862 "read": true, 00:11:30.862 "write": true, 00:11:30.862 "unmap": true, 00:11:30.862 "flush": true, 00:11:30.862 "reset": true, 00:11:30.862 "nvme_admin": false, 00:11:30.862 "nvme_io": false, 00:11:30.862 "nvme_io_md": false, 00:11:30.862 "write_zeroes": true, 00:11:30.862 "zcopy": true, 00:11:30.862 "get_zone_info": false, 00:11:30.862 "zone_management": false, 00:11:30.862 "zone_append": false, 00:11:30.862 "compare": false, 00:11:30.862 "compare_and_write": false, 00:11:30.862 "abort": true, 00:11:30.862 "seek_hole": false, 00:11:30.862 "seek_data": false, 00:11:30.862 "copy": true, 00:11:30.862 "nvme_iov_md": false 00:11:30.862 }, 00:11:30.862 "memory_domains": [ 00:11:30.862 { 00:11:30.862 "dma_device_id": "system", 00:11:30.862 "dma_device_type": 1 00:11:30.862 }, 00:11:30.862 { 00:11:30.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.862 "dma_device_type": 2 00:11:30.862 } 00:11:30.862 ], 00:11:30.862 "driver_specific": {} 00:11:30.862 }' 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:30.862 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:30.863 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:30.863 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:31.121 "name": "BaseBdev2", 00:11:31.121 "aliases": [ 00:11:31.121 "eab04efa-42cf-11ef-96ac-773515fba644" 00:11:31.121 ], 00:11:31.121 "product_name": "Malloc disk", 00:11:31.121 "block_size": 512, 00:11:31.121 "num_blocks": 65536, 00:11:31.121 "uuid": "eab04efa-42cf-11ef-96ac-773515fba644", 00:11:31.121 "assigned_rate_limits": { 00:11:31.121 "rw_ios_per_sec": 0, 00:11:31.121 "rw_mbytes_per_sec": 0, 00:11:31.121 "r_mbytes_per_sec": 0, 00:11:31.121 "w_mbytes_per_sec": 0 00:11:31.121 }, 00:11:31.121 "claimed": true, 00:11:31.121 "claim_type": "exclusive_write", 00:11:31.121 "zoned": false, 00:11:31.121 "supported_io_types": { 00:11:31.121 "read": true, 00:11:31.121 "write": true, 00:11:31.121 "unmap": true, 00:11:31.121 "flush": true, 00:11:31.121 "reset": true, 00:11:31.121 "nvme_admin": false, 00:11:31.121 "nvme_io": 
false, 00:11:31.121 "nvme_io_md": false, 00:11:31.121 "write_zeroes": true, 00:11:31.121 "zcopy": true, 00:11:31.121 "get_zone_info": false, 00:11:31.121 "zone_management": false, 00:11:31.121 "zone_append": false, 00:11:31.121 "compare": false, 00:11:31.121 "compare_and_write": false, 00:11:31.121 "abort": true, 00:11:31.121 "seek_hole": false, 00:11:31.121 "seek_data": false, 00:11:31.121 "copy": true, 00:11:31.121 "nvme_iov_md": false 00:11:31.121 }, 00:11:31.121 "memory_domains": [ 00:11:31.121 { 00:11:31.121 "dma_device_id": "system", 00:11:31.121 "dma_device_type": 1 00:11:31.121 }, 00:11:31.121 { 00:11:31.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.121 "dma_device_type": 2 00:11:31.121 } 00:11:31.121 ], 00:11:31.121 "driver_specific": {} 00:11:31.121 }' 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:31.121 17:30:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:31.703 "name": "BaseBdev3", 00:11:31.703 "aliases": [ 00:11:31.703 "eb7c2cfe-42cf-11ef-96ac-773515fba644" 00:11:31.703 ], 00:11:31.703 "product_name": "Malloc disk", 00:11:31.703 "block_size": 512, 00:11:31.703 "num_blocks": 65536, 00:11:31.703 "uuid": "eb7c2cfe-42cf-11ef-96ac-773515fba644", 00:11:31.703 "assigned_rate_limits": { 00:11:31.703 "rw_ios_per_sec": 0, 00:11:31.703 "rw_mbytes_per_sec": 0, 00:11:31.703 "r_mbytes_per_sec": 0, 00:11:31.703 "w_mbytes_per_sec": 0 00:11:31.703 }, 00:11:31.703 "claimed": true, 00:11:31.703 "claim_type": "exclusive_write", 00:11:31.703 "zoned": false, 00:11:31.703 "supported_io_types": { 00:11:31.703 "read": true, 00:11:31.703 "write": true, 00:11:31.703 "unmap": true, 00:11:31.703 "flush": true, 00:11:31.703 "reset": true, 00:11:31.703 "nvme_admin": false, 00:11:31.703 "nvme_io": false, 00:11:31.703 "nvme_io_md": false, 00:11:31.703 "write_zeroes": true, 00:11:31.703 "zcopy": true, 00:11:31.703 "get_zone_info": false, 00:11:31.703 "zone_management": false, 00:11:31.703 
"zone_append": false, 00:11:31.703 "compare": false, 00:11:31.703 "compare_and_write": false, 00:11:31.703 "abort": true, 00:11:31.703 "seek_hole": false, 00:11:31.703 "seek_data": false, 00:11:31.703 "copy": true, 00:11:31.703 "nvme_iov_md": false 00:11:31.703 }, 00:11:31.703 "memory_domains": [ 00:11:31.703 { 00:11:31.703 "dma_device_id": "system", 00:11:31.703 "dma_device_type": 1 00:11:31.703 }, 00:11:31.703 { 00:11:31.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.703 "dma_device_type": 2 00:11:31.703 } 00:11:31.703 ], 00:11:31.703 "driver_specific": {} 00:11:31.703 }' 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:31.703 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:31.963 [2024-07-15 17:30:27.570354] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.963 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.220 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:32.220 "name": "Existed_Raid", 00:11:32.220 "uuid": "eb7c3344-42cf-11ef-96ac-773515fba644", 00:11:32.220 "strip_size_kb": 0, 00:11:32.220 "state": "online", 00:11:32.220 "raid_level": "raid1", 00:11:32.220 "superblock": false, 00:11:32.220 "num_base_bdevs": 3, 00:11:32.220 "num_base_bdevs_discovered": 2, 00:11:32.220 "num_base_bdevs_operational": 2, 00:11:32.220 "base_bdevs_list": [ 00:11:32.220 { 00:11:32.220 "name": null, 00:11:32.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:32.220 "is_configured": false, 00:11:32.220 "data_offset": 0, 00:11:32.220 "data_size": 65536 00:11:32.220 }, 00:11:32.220 { 00:11:32.220 "name": "BaseBdev2", 00:11:32.220 "uuid": "eab04efa-42cf-11ef-96ac-773515fba644", 00:11:32.220 "is_configured": true, 00:11:32.220 "data_offset": 0, 00:11:32.220 "data_size": 65536 00:11:32.220 }, 00:11:32.220 { 00:11:32.220 "name": "BaseBdev3", 00:11:32.220 "uuid": "eb7c2cfe-42cf-11ef-96ac-773515fba644", 00:11:32.220 "is_configured": true, 00:11:32.220 "data_offset": 0, 00:11:32.220 "data_size": 65536 00:11:32.220 } 00:11:32.220 ] 00:11:32.220 }' 00:11:32.221 17:30:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:32.221 17:30:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.479 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:32.479 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:32.479 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.479 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:32.737 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:32.737 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:32.737 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:32.996 [2024-07-15 17:30:28.740580] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:32.996 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:32.996 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:32.996 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.996 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:33.254 17:30:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:33.254 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:33.254 17:30:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:33.523 [2024-07-15 17:30:29.314796] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:33.523 [2024-07-15 17:30:29.314850] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.523 [2024-07-15 17:30:29.321259] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.523 [2024-07-15 17:30:29.321275] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.523 [2024-07-15 17:30:29.321279] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8585034a00 name Existed_Raid, state offline 00:11:33.523 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:33.523 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:33.523 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.523 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:33.785 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:33.785 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:33.785 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:33.785 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:33.785 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:33.785 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:34.044 BaseBdev2 00:11:34.303 17:30:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:34.303 17:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:34.303 17:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:34.303 17:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:34.303 17:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:34.303 17:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:34.303 17:30:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:34.303 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:34.560 [ 00:11:34.560 { 00:11:34.560 "name": "BaseBdev2", 00:11:34.560 "aliases": [ 00:11:34.560 "ee7e4a0a-42cf-11ef-96ac-773515fba644" 00:11:34.560 ], 00:11:34.560 "product_name": "Malloc disk", 00:11:34.560 
"block_size": 512, 00:11:34.560 "num_blocks": 65536, 00:11:34.560 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:34.560 "assigned_rate_limits": { 00:11:34.560 "rw_ios_per_sec": 0, 00:11:34.560 "rw_mbytes_per_sec": 0, 00:11:34.560 "r_mbytes_per_sec": 0, 00:11:34.560 "w_mbytes_per_sec": 0 00:11:34.560 }, 00:11:34.560 "claimed": false, 00:11:34.560 "zoned": false, 00:11:34.560 "supported_io_types": { 00:11:34.560 "read": true, 00:11:34.560 "write": true, 00:11:34.560 "unmap": true, 00:11:34.560 "flush": true, 00:11:34.560 "reset": true, 00:11:34.560 "nvme_admin": false, 00:11:34.560 "nvme_io": false, 00:11:34.560 "nvme_io_md": false, 00:11:34.560 "write_zeroes": true, 00:11:34.560 "zcopy": true, 00:11:34.560 "get_zone_info": false, 00:11:34.560 "zone_management": false, 00:11:34.560 "zone_append": false, 00:11:34.560 "compare": false, 00:11:34.560 "compare_and_write": false, 00:11:34.560 "abort": true, 00:11:34.560 "seek_hole": false, 00:11:34.560 "seek_data": false, 00:11:34.560 "copy": true, 00:11:34.560 "nvme_iov_md": false 00:11:34.560 }, 00:11:34.560 "memory_domains": [ 00:11:34.560 { 00:11:34.560 "dma_device_id": "system", 00:11:34.560 "dma_device_type": 1 00:11:34.560 }, 00:11:34.560 { 00:11:34.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.560 "dma_device_type": 2 00:11:34.560 } 00:11:34.560 ], 00:11:34.560 "driver_specific": {} 00:11:34.560 } 00:11:34.560 ] 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:34.852 BaseBdev3 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:34.852 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:35.147 17:30:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:35.405 [ 00:11:35.405 { 00:11:35.405 "name": "BaseBdev3", 00:11:35.405 "aliases": [ 00:11:35.405 "eeedfd27-42cf-11ef-96ac-773515fba644" 00:11:35.405 ], 00:11:35.405 "product_name": "Malloc disk", 00:11:35.405 "block_size": 512, 00:11:35.405 "num_blocks": 65536, 00:11:35.405 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:35.405 "assigned_rate_limits": { 00:11:35.405 "rw_ios_per_sec": 0, 00:11:35.405 "rw_mbytes_per_sec": 0, 00:11:35.405 "r_mbytes_per_sec": 0, 00:11:35.405 "w_mbytes_per_sec": 0 00:11:35.405 }, 00:11:35.405 "claimed": false, 
00:11:35.405 "zoned": false, 00:11:35.405 "supported_io_types": { 00:11:35.405 "read": true, 00:11:35.405 "write": true, 00:11:35.405 "unmap": true, 00:11:35.405 "flush": true, 00:11:35.405 "reset": true, 00:11:35.405 "nvme_admin": false, 00:11:35.405 "nvme_io": false, 00:11:35.405 "nvme_io_md": false, 00:11:35.405 "write_zeroes": true, 00:11:35.405 "zcopy": true, 00:11:35.405 "get_zone_info": false, 00:11:35.405 "zone_management": false, 00:11:35.405 "zone_append": false, 00:11:35.405 "compare": false, 00:11:35.405 "compare_and_write": false, 00:11:35.405 "abort": true, 00:11:35.405 "seek_hole": false, 00:11:35.405 "seek_data": false, 00:11:35.405 "copy": true, 00:11:35.405 "nvme_iov_md": false 00:11:35.405 }, 00:11:35.405 "memory_domains": [ 00:11:35.405 { 00:11:35.405 "dma_device_id": "system", 00:11:35.405 "dma_device_type": 1 00:11:35.405 }, 00:11:35.405 { 00:11:35.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.405 "dma_device_type": 2 00:11:35.405 } 00:11:35.405 ], 00:11:35.405 "driver_specific": {} 00:11:35.405 } 00:11:35.405 ] 00:11:35.405 17:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:35.405 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:35.405 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:35.405 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:35.663 [2024-07-15 17:30:31.381270] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.663 [2024-07-15 17:30:31.381321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.663 [2024-07-15 17:30:31.381330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:35.663 [2024-07-15 17:30:31.381929] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.663 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:35.921 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:35.921 "name": "Existed_Raid", 00:11:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.921 "strip_size_kb": 0, 00:11:35.921 "state": "configuring", 00:11:35.921 "raid_level": "raid1", 00:11:35.921 "superblock": false, 00:11:35.921 "num_base_bdevs": 3, 00:11:35.921 "num_base_bdevs_discovered": 2, 00:11:35.921 "num_base_bdevs_operational": 3, 00:11:35.921 "base_bdevs_list": [ 00:11:35.921 { 00:11:35.921 "name": "BaseBdev1", 00:11:35.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.921 "is_configured": false, 00:11:35.921 "data_offset": 0, 00:11:35.921 "data_size": 0 00:11:35.921 }, 00:11:35.921 { 00:11:35.921 "name": "BaseBdev2", 00:11:35.921 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:35.921 "is_configured": true, 00:11:35.921 "data_offset": 0, 00:11:35.921 "data_size": 65536 00:11:35.921 }, 00:11:35.921 { 00:11:35.921 "name": "BaseBdev3", 00:11:35.921 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:35.921 "is_configured": true, 00:11:35.921 "data_offset": 0, 00:11:35.921 "data_size": 65536 00:11:35.921 } 00:11:35.921 ] 00:11:35.921 }' 00:11:35.921 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:35.921 17:30:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.179 17:30:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:36.437 [2024-07-15 17:30:32.201334] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.437 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:36.696 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:36.696 "name": "Existed_Raid", 00:11:36.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.696 "strip_size_kb": 0, 00:11:36.696 "state": "configuring", 00:11:36.696 "raid_level": "raid1", 00:11:36.696 "superblock": 
false, 00:11:36.696 "num_base_bdevs": 3, 00:11:36.696 "num_base_bdevs_discovered": 1, 00:11:36.696 "num_base_bdevs_operational": 3, 00:11:36.696 "base_bdevs_list": [ 00:11:36.696 { 00:11:36.696 "name": "BaseBdev1", 00:11:36.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.696 "is_configured": false, 00:11:36.696 "data_offset": 0, 00:11:36.696 "data_size": 0 00:11:36.696 }, 00:11:36.696 { 00:11:36.696 "name": null, 00:11:36.696 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:36.696 "is_configured": false, 00:11:36.696 "data_offset": 0, 00:11:36.696 "data_size": 65536 00:11:36.696 }, 00:11:36.696 { 00:11:36.696 "name": "BaseBdev3", 00:11:36.696 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:36.696 "is_configured": true, 00:11:36.696 "data_offset": 0, 00:11:36.696 "data_size": 65536 00:11:36.696 } 00:11:36.696 ] 00:11:36.696 }' 00:11:36.696 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.696 17:30:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.263 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.263 17:30:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:37.521 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:37.521 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:37.778 [2024-07-15 17:30:33.377560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:37.778 BaseBdev1 00:11:37.778 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:37.778 17:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:37.778 17:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:37.778 17:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:37.778 17:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:37.778 17:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:37.778 17:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.036 17:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.294 [ 00:11:38.294 { 00:11:38.294 "name": "BaseBdev1", 00:11:38.294 "aliases": [ 00:11:38.294 "f09599e4-42cf-11ef-96ac-773515fba644" 00:11:38.294 ], 00:11:38.294 "product_name": "Malloc disk", 00:11:38.294 "block_size": 512, 00:11:38.294 "num_blocks": 65536, 00:11:38.294 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:38.294 "assigned_rate_limits": { 00:11:38.294 "rw_ios_per_sec": 0, 00:11:38.294 "rw_mbytes_per_sec": 0, 00:11:38.294 "r_mbytes_per_sec": 0, 00:11:38.294 "w_mbytes_per_sec": 0 00:11:38.294 }, 00:11:38.294 "claimed": true, 00:11:38.294 "claim_type": "exclusive_write", 00:11:38.294 "zoned": false, 00:11:38.294 
"supported_io_types": { 00:11:38.294 "read": true, 00:11:38.295 "write": true, 00:11:38.295 "unmap": true, 00:11:38.295 "flush": true, 00:11:38.295 "reset": true, 00:11:38.295 "nvme_admin": false, 00:11:38.295 "nvme_io": false, 00:11:38.295 "nvme_io_md": false, 00:11:38.295 "write_zeroes": true, 00:11:38.295 "zcopy": true, 00:11:38.295 "get_zone_info": false, 00:11:38.295 "zone_management": false, 00:11:38.295 "zone_append": false, 00:11:38.295 "compare": false, 00:11:38.295 "compare_and_write": false, 00:11:38.295 "abort": true, 00:11:38.295 "seek_hole": false, 00:11:38.295 "seek_data": false, 00:11:38.295 "copy": true, 00:11:38.295 "nvme_iov_md": false 00:11:38.295 }, 00:11:38.295 "memory_domains": [ 00:11:38.295 { 00:11:38.295 "dma_device_id": "system", 00:11:38.295 "dma_device_type": 1 00:11:38.295 }, 00:11:38.295 { 00:11:38.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.295 "dma_device_type": 2 00:11:38.295 } 00:11:38.295 ], 00:11:38.295 "driver_specific": {} 00:11:38.295 } 00:11:38.295 ] 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.295 17:30:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.553 17:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.553 "name": "Existed_Raid", 00:11:38.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.553 "strip_size_kb": 0, 00:11:38.553 "state": "configuring", 00:11:38.553 "raid_level": "raid1", 00:11:38.553 "superblock": false, 00:11:38.553 "num_base_bdevs": 3, 00:11:38.553 "num_base_bdevs_discovered": 2, 00:11:38.553 "num_base_bdevs_operational": 3, 00:11:38.553 "base_bdevs_list": [ 00:11:38.553 { 00:11:38.553 "name": "BaseBdev1", 00:11:38.553 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:38.553 "is_configured": true, 00:11:38.553 "data_offset": 0, 00:11:38.553 "data_size": 65536 00:11:38.553 }, 00:11:38.553 { 00:11:38.553 "name": null, 00:11:38.553 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:38.553 "is_configured": false, 00:11:38.553 "data_offset": 0, 00:11:38.553 "data_size": 65536 00:11:38.553 }, 00:11:38.553 { 
00:11:38.553 "name": "BaseBdev3", 00:11:38.553 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:38.553 "is_configured": true, 00:11:38.553 "data_offset": 0, 00:11:38.553 "data_size": 65536 00:11:38.553 } 00:11:38.553 ] 00:11:38.553 }' 00:11:38.553 17:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.553 17:30:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.812 17:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.812 17:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:39.071 17:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:39.071 17:30:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:39.329 [2024-07-15 17:30:35.017475] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.329 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.587 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:39.587 "name": "Existed_Raid", 00:11:39.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:39.587 "strip_size_kb": 0, 00:11:39.587 "state": "configuring", 00:11:39.587 "raid_level": "raid1", 00:11:39.587 "superblock": false, 00:11:39.587 "num_base_bdevs": 3, 00:11:39.587 "num_base_bdevs_discovered": 1, 00:11:39.587 "num_base_bdevs_operational": 3, 00:11:39.587 "base_bdevs_list": [ 00:11:39.587 { 00:11:39.587 "name": "BaseBdev1", 00:11:39.587 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:39.587 "is_configured": true, 00:11:39.587 "data_offset": 0, 00:11:39.587 "data_size": 65536 00:11:39.587 }, 00:11:39.587 { 00:11:39.587 "name": null, 00:11:39.587 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:39.587 "is_configured": false, 00:11:39.587 "data_offset": 0, 00:11:39.587 
"data_size": 65536 00:11:39.587 }, 00:11:39.587 { 00:11:39.587 "name": null, 00:11:39.587 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:39.587 "is_configured": false, 00:11:39.587 "data_offset": 0, 00:11:39.587 "data_size": 65536 00:11:39.587 } 00:11:39.587 ] 00:11:39.587 }' 00:11:39.587 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.587 17:30:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.845 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:39.845 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.101 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:40.102 17:30:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:40.359 [2024-07-15 17:30:36.189532] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.617 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.875 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:40.875 "name": "Existed_Raid", 00:11:40.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.875 "strip_size_kb": 0, 00:11:40.875 "state": "configuring", 00:11:40.875 "raid_level": "raid1", 00:11:40.875 "superblock": false, 00:11:40.875 "num_base_bdevs": 3, 00:11:40.875 "num_base_bdevs_discovered": 2, 00:11:40.875 "num_base_bdevs_operational": 3, 00:11:40.875 "base_bdevs_list": [ 00:11:40.875 { 00:11:40.875 "name": "BaseBdev1", 00:11:40.875 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:40.875 "is_configured": true, 00:11:40.875 "data_offset": 0, 00:11:40.875 "data_size": 65536 00:11:40.875 }, 00:11:40.875 { 00:11:40.875 "name": null, 00:11:40.875 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 
00:11:40.875 "is_configured": false, 00:11:40.875 "data_offset": 0, 00:11:40.875 "data_size": 65536 00:11:40.875 }, 00:11:40.875 { 00:11:40.875 "name": "BaseBdev3", 00:11:40.875 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:40.875 "is_configured": true, 00:11:40.875 "data_offset": 0, 00:11:40.875 "data_size": 65536 00:11:40.875 } 00:11:40.875 ] 00:11:40.875 }' 00:11:40.875 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:40.875 17:30:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.133 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.133 17:30:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:41.391 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:41.391 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:41.649 [2024-07-15 17:30:37.333682] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.649 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.910 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.910 "name": "Existed_Raid", 00:11:41.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.910 "strip_size_kb": 0, 00:11:41.910 "state": "configuring", 00:11:41.910 "raid_level": "raid1", 00:11:41.910 "superblock": false, 00:11:41.910 "num_base_bdevs": 3, 00:11:41.910 "num_base_bdevs_discovered": 1, 00:11:41.910 "num_base_bdevs_operational": 3, 00:11:41.910 "base_bdevs_list": [ 00:11:41.910 { 00:11:41.910 "name": null, 00:11:41.910 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:41.910 "is_configured": false, 00:11:41.910 "data_offset": 0, 00:11:41.910 "data_size": 65536 00:11:41.910 }, 00:11:41.910 { 00:11:41.910 "name": null, 00:11:41.910 "uuid": 
"ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:41.910 "is_configured": false, 00:11:41.910 "data_offset": 0, 00:11:41.910 "data_size": 65536 00:11:41.910 }, 00:11:41.910 { 00:11:41.910 "name": "BaseBdev3", 00:11:41.910 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:41.910 "is_configured": true, 00:11:41.910 "data_offset": 0, 00:11:41.910 "data_size": 65536 00:11:41.910 } 00:11:41.910 ] 00:11:41.910 }' 00:11:41.910 17:30:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.910 17:30:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.477 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.477 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:42.477 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:42.477 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:43.044 [2024-07-15 17:30:38.603987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.044 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.302 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:43.302 "name": "Existed_Raid", 00:11:43.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.302 "strip_size_kb": 0, 00:11:43.302 "state": "configuring", 00:11:43.302 "raid_level": "raid1", 00:11:43.302 "superblock": false, 00:11:43.302 "num_base_bdevs": 3, 00:11:43.302 "num_base_bdevs_discovered": 2, 00:11:43.302 "num_base_bdevs_operational": 3, 00:11:43.302 "base_bdevs_list": [ 00:11:43.302 { 00:11:43.302 "name": null, 00:11:43.302 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:43.302 "is_configured": false, 00:11:43.302 "data_offset": 0, 00:11:43.302 "data_size": 65536 
00:11:43.302 }, 00:11:43.302 { 00:11:43.302 "name": "BaseBdev2", 00:11:43.302 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:43.302 "is_configured": true, 00:11:43.302 "data_offset": 0, 00:11:43.302 "data_size": 65536 00:11:43.302 }, 00:11:43.302 { 00:11:43.302 "name": "BaseBdev3", 00:11:43.302 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:43.302 "is_configured": true, 00:11:43.302 "data_offset": 0, 00:11:43.302 "data_size": 65536 00:11:43.302 } 00:11:43.302 ] 00:11:43.302 }' 00:11:43.302 17:30:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:43.302 17:30:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.561 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.561 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:43.819 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:43.819 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.819 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:44.077 17:30:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u f09599e4-42cf-11ef-96ac-773515fba644 00:11:44.644 [2024-07-15 17:30:40.176197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:44.644 [2024-07-15 17:30:40.176239] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x8585034f00 00:11:44.644 [2024-07-15 17:30:40.176244] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:44.644 [2024-07-15 17:30:40.176266] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8585097e20 00:11:44.644 [2024-07-15 17:30:40.176341] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x8585034f00 00:11:44.644 [2024-07-15 17:30:40.176346] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x8585034f00 00:11:44.644 [2024-07-15 17:30:40.176379] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.644 NewBaseBdev 00:11:44.644 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:44.644 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:44.644 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:44.644 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:44.644 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:44.644 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:44.644 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:44.902 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:45.160 [ 00:11:45.160 { 00:11:45.160 "name": "NewBaseBdev", 00:11:45.161 "aliases": [ 00:11:45.161 "f09599e4-42cf-11ef-96ac-773515fba644" 00:11:45.161 ], 00:11:45.161 "product_name": "Malloc disk", 00:11:45.161 "block_size": 512, 00:11:45.161 "num_blocks": 65536, 00:11:45.161 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:45.161 "assigned_rate_limits": { 00:11:45.161 "rw_ios_per_sec": 0, 00:11:45.161 "rw_mbytes_per_sec": 0, 00:11:45.161 "r_mbytes_per_sec": 0, 00:11:45.161 "w_mbytes_per_sec": 0 00:11:45.161 }, 00:11:45.161 "claimed": true, 00:11:45.161 "claim_type": "exclusive_write", 00:11:45.161 "zoned": false, 00:11:45.161 "supported_io_types": { 00:11:45.161 "read": true, 00:11:45.161 "write": true, 00:11:45.161 "unmap": true, 00:11:45.161 "flush": true, 00:11:45.161 "reset": true, 00:11:45.161 "nvme_admin": false, 00:11:45.161 "nvme_io": false, 00:11:45.161 "nvme_io_md": false, 00:11:45.161 "write_zeroes": true, 00:11:45.161 "zcopy": true, 00:11:45.161 "get_zone_info": false, 00:11:45.161 "zone_management": false, 00:11:45.161 "zone_append": false, 00:11:45.161 "compare": false, 00:11:45.161 "compare_and_write": false, 00:11:45.161 "abort": true, 00:11:45.161 "seek_hole": false, 00:11:45.161 "seek_data": false, 00:11:45.161 "copy": true, 00:11:45.161 "nvme_iov_md": false 00:11:45.161 }, 00:11:45.161 "memory_domains": [ 00:11:45.161 { 00:11:45.161 "dma_device_id": "system", 00:11:45.161 "dma_device_type": 1 00:11:45.161 }, 00:11:45.161 { 00:11:45.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.161 "dma_device_type": 2 00:11:45.161 } 00:11:45.161 ], 00:11:45.161 "driver_specific": {} 00:11:45.161 } 00:11:45.161 ] 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.161 17:30:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.420 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.420 "name": "Existed_Raid", 00:11:45.420 "uuid": "f4a304c4-42cf-11ef-96ac-773515fba644", 00:11:45.420 
"strip_size_kb": 0, 00:11:45.420 "state": "online", 00:11:45.420 "raid_level": "raid1", 00:11:45.420 "superblock": false, 00:11:45.420 "num_base_bdevs": 3, 00:11:45.420 "num_base_bdevs_discovered": 3, 00:11:45.420 "num_base_bdevs_operational": 3, 00:11:45.420 "base_bdevs_list": [ 00:11:45.420 { 00:11:45.420 "name": "NewBaseBdev", 00:11:45.420 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:45.420 "is_configured": true, 00:11:45.420 "data_offset": 0, 00:11:45.420 "data_size": 65536 00:11:45.420 }, 00:11:45.420 { 00:11:45.420 "name": "BaseBdev2", 00:11:45.420 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:45.420 "is_configured": true, 00:11:45.420 "data_offset": 0, 00:11:45.420 "data_size": 65536 00:11:45.420 }, 00:11:45.420 { 00:11:45.420 "name": "BaseBdev3", 00:11:45.420 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:45.420 "is_configured": true, 00:11:45.420 "data_offset": 0, 00:11:45.420 "data_size": 65536 00:11:45.420 } 00:11:45.420 ] 00:11:45.420 }' 00:11:45.420 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.420 17:30:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.678 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:45.678 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:45.678 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:45.678 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:45.678 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:45.678 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:45.678 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:45.678 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:45.937 [2024-07-15 17:30:41.672100] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:45.937 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:45.937 "name": "Existed_Raid", 00:11:45.937 "aliases": [ 00:11:45.937 "f4a304c4-42cf-11ef-96ac-773515fba644" 00:11:45.937 ], 00:11:45.937 "product_name": "Raid Volume", 00:11:45.937 "block_size": 512, 00:11:45.937 "num_blocks": 65536, 00:11:45.937 "uuid": "f4a304c4-42cf-11ef-96ac-773515fba644", 00:11:45.937 "assigned_rate_limits": { 00:11:45.937 "rw_ios_per_sec": 0, 00:11:45.937 "rw_mbytes_per_sec": 0, 00:11:45.937 "r_mbytes_per_sec": 0, 00:11:45.937 "w_mbytes_per_sec": 0 00:11:45.937 }, 00:11:45.938 "claimed": false, 00:11:45.938 "zoned": false, 00:11:45.938 "supported_io_types": { 00:11:45.938 "read": true, 00:11:45.938 "write": true, 00:11:45.938 "unmap": false, 00:11:45.938 "flush": false, 00:11:45.938 "reset": true, 00:11:45.938 "nvme_admin": false, 00:11:45.938 "nvme_io": false, 00:11:45.938 "nvme_io_md": false, 00:11:45.938 "write_zeroes": true, 00:11:45.938 "zcopy": false, 00:11:45.938 "get_zone_info": false, 00:11:45.938 "zone_management": false, 00:11:45.938 "zone_append": false, 00:11:45.938 "compare": false, 00:11:45.938 "compare_and_write": false, 00:11:45.938 "abort": false, 00:11:45.938 "seek_hole": false, 00:11:45.938 "seek_data": false, 
00:11:45.938 "copy": false, 00:11:45.938 "nvme_iov_md": false 00:11:45.938 }, 00:11:45.938 "memory_domains": [ 00:11:45.938 { 00:11:45.938 "dma_device_id": "system", 00:11:45.938 "dma_device_type": 1 00:11:45.938 }, 00:11:45.938 { 00:11:45.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.938 "dma_device_type": 2 00:11:45.938 }, 00:11:45.938 { 00:11:45.938 "dma_device_id": "system", 00:11:45.938 "dma_device_type": 1 00:11:45.938 }, 00:11:45.938 { 00:11:45.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.938 "dma_device_type": 2 00:11:45.938 }, 00:11:45.938 { 00:11:45.938 "dma_device_id": "system", 00:11:45.938 "dma_device_type": 1 00:11:45.938 }, 00:11:45.938 { 00:11:45.938 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.938 "dma_device_type": 2 00:11:45.938 } 00:11:45.938 ], 00:11:45.938 "driver_specific": { 00:11:45.938 "raid": { 00:11:45.938 "uuid": "f4a304c4-42cf-11ef-96ac-773515fba644", 00:11:45.938 "strip_size_kb": 0, 00:11:45.938 "state": "online", 00:11:45.938 "raid_level": "raid1", 00:11:45.938 "superblock": false, 00:11:45.938 "num_base_bdevs": 3, 00:11:45.938 "num_base_bdevs_discovered": 3, 00:11:45.938 "num_base_bdevs_operational": 3, 00:11:45.938 "base_bdevs_list": [ 00:11:45.938 { 00:11:45.938 "name": "NewBaseBdev", 00:11:45.938 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:45.938 "is_configured": true, 00:11:45.938 "data_offset": 0, 00:11:45.938 "data_size": 65536 00:11:45.938 }, 00:11:45.938 { 00:11:45.938 "name": "BaseBdev2", 00:11:45.938 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:45.938 "is_configured": true, 00:11:45.938 "data_offset": 0, 00:11:45.938 "data_size": 65536 00:11:45.938 }, 00:11:45.938 { 00:11:45.938 "name": "BaseBdev3", 00:11:45.938 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:45.938 "is_configured": true, 00:11:45.938 "data_offset": 0, 00:11:45.938 "data_size": 65536 00:11:45.938 } 00:11:45.938 ] 00:11:45.938 } 00:11:45.938 } 00:11:45.938 }' 00:11:45.938 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:45.938 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:45.938 BaseBdev2 00:11:45.938 BaseBdev3' 00:11:45.938 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:45.938 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:45.938 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:46.196 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:46.196 "name": "NewBaseBdev", 00:11:46.196 "aliases": [ 00:11:46.196 "f09599e4-42cf-11ef-96ac-773515fba644" 00:11:46.196 ], 00:11:46.196 "product_name": "Malloc disk", 00:11:46.196 "block_size": 512, 00:11:46.196 "num_blocks": 65536, 00:11:46.196 "uuid": "f09599e4-42cf-11ef-96ac-773515fba644", 00:11:46.196 "assigned_rate_limits": { 00:11:46.196 "rw_ios_per_sec": 0, 00:11:46.196 "rw_mbytes_per_sec": 0, 00:11:46.196 "r_mbytes_per_sec": 0, 00:11:46.196 "w_mbytes_per_sec": 0 00:11:46.196 }, 00:11:46.196 "claimed": true, 00:11:46.196 "claim_type": "exclusive_write", 00:11:46.196 "zoned": false, 00:11:46.196 "supported_io_types": { 00:11:46.196 "read": true, 00:11:46.196 "write": true, 00:11:46.196 "unmap": true, 00:11:46.196 "flush": true, 00:11:46.196 
"reset": true, 00:11:46.196 "nvme_admin": false, 00:11:46.197 "nvme_io": false, 00:11:46.197 "nvme_io_md": false, 00:11:46.197 "write_zeroes": true, 00:11:46.197 "zcopy": true, 00:11:46.197 "get_zone_info": false, 00:11:46.197 "zone_management": false, 00:11:46.197 "zone_append": false, 00:11:46.197 "compare": false, 00:11:46.197 "compare_and_write": false, 00:11:46.197 "abort": true, 00:11:46.197 "seek_hole": false, 00:11:46.197 "seek_data": false, 00:11:46.197 "copy": true, 00:11:46.197 "nvme_iov_md": false 00:11:46.197 }, 00:11:46.197 "memory_domains": [ 00:11:46.197 { 00:11:46.197 "dma_device_id": "system", 00:11:46.197 "dma_device_type": 1 00:11:46.197 }, 00:11:46.197 { 00:11:46.197 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.197 "dma_device_type": 2 00:11:46.197 } 00:11:46.197 ], 00:11:46.197 "driver_specific": {} 00:11:46.197 }' 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.197 17:30:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.197 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.197 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.197 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:46.197 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:46.197 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:46.471 "name": "BaseBdev2", 00:11:46.471 "aliases": [ 00:11:46.471 "ee7e4a0a-42cf-11ef-96ac-773515fba644" 00:11:46.471 ], 00:11:46.471 "product_name": "Malloc disk", 00:11:46.471 "block_size": 512, 00:11:46.471 "num_blocks": 65536, 00:11:46.471 "uuid": "ee7e4a0a-42cf-11ef-96ac-773515fba644", 00:11:46.471 "assigned_rate_limits": { 00:11:46.471 "rw_ios_per_sec": 0, 00:11:46.471 "rw_mbytes_per_sec": 0, 00:11:46.471 "r_mbytes_per_sec": 0, 00:11:46.471 "w_mbytes_per_sec": 0 00:11:46.471 }, 00:11:46.471 "claimed": true, 00:11:46.471 "claim_type": "exclusive_write", 00:11:46.471 "zoned": false, 00:11:46.471 "supported_io_types": { 00:11:46.471 "read": true, 00:11:46.471 "write": true, 00:11:46.471 "unmap": true, 00:11:46.471 "flush": true, 00:11:46.471 "reset": true, 00:11:46.471 "nvme_admin": false, 00:11:46.471 "nvme_io": false, 00:11:46.471 "nvme_io_md": false, 00:11:46.471 "write_zeroes": true, 00:11:46.471 "zcopy": true, 00:11:46.471 
"get_zone_info": false, 00:11:46.471 "zone_management": false, 00:11:46.471 "zone_append": false, 00:11:46.471 "compare": false, 00:11:46.471 "compare_and_write": false, 00:11:46.471 "abort": true, 00:11:46.471 "seek_hole": false, 00:11:46.471 "seek_data": false, 00:11:46.471 "copy": true, 00:11:46.471 "nvme_iov_md": false 00:11:46.471 }, 00:11:46.471 "memory_domains": [ 00:11:46.471 { 00:11:46.471 "dma_device_id": "system", 00:11:46.471 "dma_device_type": 1 00:11:46.471 }, 00:11:46.471 { 00:11:46.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.471 "dma_device_type": 2 00:11:46.471 } 00:11:46.471 ], 00:11:46.471 "driver_specific": {} 00:11:46.471 }' 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.471 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.729 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.729 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.729 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.729 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.729 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:46.729 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:46.729 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:46.988 "name": "BaseBdev3", 00:11:46.988 "aliases": [ 00:11:46.988 "eeedfd27-42cf-11ef-96ac-773515fba644" 00:11:46.988 ], 00:11:46.988 "product_name": "Malloc disk", 00:11:46.988 "block_size": 512, 00:11:46.988 "num_blocks": 65536, 00:11:46.988 "uuid": "eeedfd27-42cf-11ef-96ac-773515fba644", 00:11:46.988 "assigned_rate_limits": { 00:11:46.988 "rw_ios_per_sec": 0, 00:11:46.988 "rw_mbytes_per_sec": 0, 00:11:46.988 "r_mbytes_per_sec": 0, 00:11:46.988 "w_mbytes_per_sec": 0 00:11:46.988 }, 00:11:46.988 "claimed": true, 00:11:46.988 "claim_type": "exclusive_write", 00:11:46.988 "zoned": false, 00:11:46.988 "supported_io_types": { 00:11:46.988 "read": true, 00:11:46.988 "write": true, 00:11:46.988 "unmap": true, 00:11:46.988 "flush": true, 00:11:46.988 "reset": true, 00:11:46.988 "nvme_admin": false, 00:11:46.988 "nvme_io": false, 00:11:46.988 "nvme_io_md": false, 00:11:46.988 "write_zeroes": true, 00:11:46.988 "zcopy": true, 00:11:46.988 "get_zone_info": false, 00:11:46.988 "zone_management": false, 00:11:46.988 "zone_append": false, 00:11:46.988 "compare": false, 00:11:46.988 "compare_and_write": false, 00:11:46.988 "abort": true, 
00:11:46.988 "seek_hole": false, 00:11:46.988 "seek_data": false, 00:11:46.988 "copy": true, 00:11:46.988 "nvme_iov_md": false 00:11:46.988 }, 00:11:46.988 "memory_domains": [ 00:11:46.988 { 00:11:46.988 "dma_device_id": "system", 00:11:46.988 "dma_device_type": 1 00:11:46.988 }, 00:11:46.988 { 00:11:46.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.988 "dma_device_type": 2 00:11:46.988 } 00:11:46.988 ], 00:11:46.988 "driver_specific": {} 00:11:46.988 }' 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:46.988 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:47.247 [2024-07-15 17:30:42.944118] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:47.247 [2024-07-15 17:30:42.944144] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:47.247 [2024-07-15 17:30:42.944168] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:47.247 [2024-07-15 17:30:42.944234] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:47.247 [2024-07-15 17:30:42.944240] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8585034f00 name Existed_Raid, state offline 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 56092 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 56092 ']' 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 56092 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 56092 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:47.247 killing process with pid 56092 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56092' 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 56092 00:11:47.247 [2024-07-15 17:30:42.972430] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:47.247 17:30:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 56092 00:11:47.247 [2024-07-15 17:30:42.990522] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:47.506 17:30:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:47.507 00:11:47.507 real 0m24.842s 00:11:47.507 user 0m45.582s 00:11:47.507 sys 0m3.298s 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.507 ************************************ 00:11:47.507 END TEST raid_state_function_test 00:11:47.507 ************************************ 00:11:47.507 17:30:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:47.507 17:30:43 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:47.507 17:30:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:47.507 17:30:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.507 17:30:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:47.507 ************************************ 00:11:47.507 START TEST raid_state_function_test_sb 00:11:47.507 ************************************ 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
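The non-superblock raid_state_function_test that finished above can be replayed by hand against a running bdev_svc app. What follows is only a rough sketch of the RPC sequence visible in the trace, not the test script itself; the rpc.py path, the /var/tmp/spdk-raid.sock socket, the bdev names, and the NewBaseBdev uuid are all taken from this particular run and will differ elsewhere.

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # drop one base bdev; the raid should fall back to the "configuring" state
    $RPC bdev_malloc_delete BaseBdev1
    $RPC bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[0].is_configured'    # expect false
    # re-add BaseBdev2, then recreate the missing slot under its original uuid
    $RPC bdev_raid_add_base_bdev Existed_Raid BaseBdev2
    $RPC bdev_malloc_create 32 512 -b NewBaseBdev -u f09599e4-42cf-11ef-96ac-773515fba644
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'    # expect "online"
    # tear down, as the test does before exiting
    $RPC bdev_raid_delete Existed_Raid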
00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=56821 00:11:47.507 Process raid pid: 56821 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56821' 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 56821 /var/tmp/spdk-raid.sock 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 56821 ']' 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.507 17:30:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:47.507 [2024-07-15 17:30:43.244277] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:11:47.507 [2024-07-15 17:30:43.244572] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:48.073 EAL: TSC is not safe to use in SMP mode 00:11:48.073 EAL: TSC is not invariant 00:11:48.073 [2024-07-15 17:30:43.781989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:48.073 [2024-07-15 17:30:43.870375] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
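The superblock variant starting here (raid_state_function_test_sb) drives the same state machine; the difference is the -s flag passed to bdev_raid_create, which reserves superblock space on each member (the get_bdevs output further down reports data_offset 2048 and data_size 63488, versus 0 and 65536 in the run without a superblock). A rough sketch of that creation step, assuming the same rpc.py path and socket as above; as in the trace, the base bdevs need not exist yet and the raid starts out in "configuring".

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # create the raid1 volume with an on-disk superblock (-s)
    $RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .superblock'    # expect true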
00:11:48.073 [2024-07-15 17:30:43.872474] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.073 [2024-07-15 17:30:43.873250] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.073 [2024-07-15 17:30:43.873263] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:48.638 [2024-07-15 17:30:44.434262] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:48.638 [2024-07-15 17:30:44.434320] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:48.638 [2024-07-15 17:30:44.434326] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:48.638 [2024-07-15 17:30:44.434335] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:48.638 [2024-07-15 17:30:44.434339] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:48.638 [2024-07-15 17:30:44.434346] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.638 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.896 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.896 "name": "Existed_Raid", 00:11:48.896 "uuid": "f72cbd18-42cf-11ef-96ac-773515fba644", 00:11:48.896 "strip_size_kb": 0, 00:11:48.896 "state": "configuring", 00:11:48.896 "raid_level": "raid1", 00:11:48.896 "superblock": true, 00:11:48.896 "num_base_bdevs": 3, 00:11:48.896 "num_base_bdevs_discovered": 0, 00:11:48.896 "num_base_bdevs_operational": 
3, 00:11:48.896 "base_bdevs_list": [ 00:11:48.896 { 00:11:48.896 "name": "BaseBdev1", 00:11:48.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.896 "is_configured": false, 00:11:48.896 "data_offset": 0, 00:11:48.896 "data_size": 0 00:11:48.896 }, 00:11:48.896 { 00:11:48.896 "name": "BaseBdev2", 00:11:48.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.896 "is_configured": false, 00:11:48.896 "data_offset": 0, 00:11:48.896 "data_size": 0 00:11:48.896 }, 00:11:48.896 { 00:11:48.896 "name": "BaseBdev3", 00:11:48.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.896 "is_configured": false, 00:11:48.896 "data_offset": 0, 00:11:48.896 "data_size": 0 00:11:48.896 } 00:11:48.896 ] 00:11:48.896 }' 00:11:48.896 17:30:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.896 17:30:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:49.462 17:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:49.462 [2024-07-15 17:30:45.262265] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:49.462 [2024-07-15 17:30:45.262295] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1714cd434500 name Existed_Raid, state configuring 00:11:49.462 17:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:49.719 [2024-07-15 17:30:45.538279] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:49.719 [2024-07-15 17:30:45.538341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:49.719 [2024-07-15 17:30:45.538346] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:49.719 [2024-07-15 17:30:45.538355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:49.719 [2024-07-15 17:30:45.538358] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:49.719 [2024-07-15 17:30:45.538366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:49.976 17:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:49.976 [2024-07-15 17:30:45.779354] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:49.976 BaseBdev1 00:11:49.976 17:30:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:49.976 17:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:49.976 17:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:49.976 17:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:49.976 17:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:49.976 17:30:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:49.976 17:30:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:50.233 17:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:50.491 [ 00:11:50.491 { 00:11:50.491 "name": "BaseBdev1", 00:11:50.491 "aliases": [ 00:11:50.491 "f7f9d2db-42cf-11ef-96ac-773515fba644" 00:11:50.491 ], 00:11:50.491 "product_name": "Malloc disk", 00:11:50.491 "block_size": 512, 00:11:50.491 "num_blocks": 65536, 00:11:50.491 "uuid": "f7f9d2db-42cf-11ef-96ac-773515fba644", 00:11:50.491 "assigned_rate_limits": { 00:11:50.491 "rw_ios_per_sec": 0, 00:11:50.491 "rw_mbytes_per_sec": 0, 00:11:50.491 "r_mbytes_per_sec": 0, 00:11:50.491 "w_mbytes_per_sec": 0 00:11:50.491 }, 00:11:50.491 "claimed": true, 00:11:50.491 "claim_type": "exclusive_write", 00:11:50.491 "zoned": false, 00:11:50.491 "supported_io_types": { 00:11:50.491 "read": true, 00:11:50.491 "write": true, 00:11:50.491 "unmap": true, 00:11:50.491 "flush": true, 00:11:50.491 "reset": true, 00:11:50.491 "nvme_admin": false, 00:11:50.491 "nvme_io": false, 00:11:50.491 "nvme_io_md": false, 00:11:50.491 "write_zeroes": true, 00:11:50.491 "zcopy": true, 00:11:50.491 "get_zone_info": false, 00:11:50.491 "zone_management": false, 00:11:50.491 "zone_append": false, 00:11:50.491 "compare": false, 00:11:50.491 "compare_and_write": false, 00:11:50.491 "abort": true, 00:11:50.491 "seek_hole": false, 00:11:50.491 "seek_data": false, 00:11:50.491 "copy": true, 00:11:50.491 "nvme_iov_md": false 00:11:50.491 }, 00:11:50.491 "memory_domains": [ 00:11:50.491 { 00:11:50.491 "dma_device_id": "system", 00:11:50.491 "dma_device_type": 1 00:11:50.491 }, 00:11:50.491 { 00:11:50.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.491 "dma_device_type": 2 00:11:50.491 } 00:11:50.491 ], 00:11:50.491 "driver_specific": {} 00:11:50.491 } 00:11:50.491 ] 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.491 17:30:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.749 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.749 "name": "Existed_Raid", 00:11:50.749 "uuid": "f7d532ce-42cf-11ef-96ac-773515fba644", 00:11:50.749 "strip_size_kb": 0, 00:11:50.749 "state": "configuring", 00:11:50.749 "raid_level": "raid1", 00:11:50.749 "superblock": true, 00:11:50.749 "num_base_bdevs": 3, 00:11:50.749 "num_base_bdevs_discovered": 1, 00:11:50.749 "num_base_bdevs_operational": 3, 00:11:50.749 "base_bdevs_list": [ 00:11:50.749 { 00:11:50.749 "name": "BaseBdev1", 00:11:50.749 "uuid": "f7f9d2db-42cf-11ef-96ac-773515fba644", 00:11:50.749 "is_configured": true, 00:11:50.749 "data_offset": 2048, 00:11:50.749 "data_size": 63488 00:11:50.749 }, 00:11:50.749 { 00:11:50.749 "name": "BaseBdev2", 00:11:50.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.750 "is_configured": false, 00:11:50.750 "data_offset": 0, 00:11:50.750 "data_size": 0 00:11:50.750 }, 00:11:50.750 { 00:11:50.750 "name": "BaseBdev3", 00:11:50.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.750 "is_configured": false, 00:11:50.750 "data_offset": 0, 00:11:50.750 "data_size": 0 00:11:50.750 } 00:11:50.750 ] 00:11:50.750 }' 00:11:50.750 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.750 17:30:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:51.007 17:30:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:51.573 [2024-07-15 17:30:47.102307] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.573 [2024-07-15 17:30:47.102339] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1714cd434500 name Existed_Raid, state configuring 00:11:51.574 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:51.574 [2024-07-15 17:30:47.386334] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.574 [2024-07-15 17:30:47.387164] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:51.574 [2024-07-15 17:30:47.387204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:51.574 [2024-07-15 17:30:47.387210] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:51.574 [2024-07-15 17:30:47.387218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:51.831 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:51.832 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:51.832 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:51.832 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.832 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.090 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:52.090 "name": "Existed_Raid", 00:11:52.090 "uuid": "f8ef304f-42cf-11ef-96ac-773515fba644", 00:11:52.090 "strip_size_kb": 0, 00:11:52.090 "state": "configuring", 00:11:52.090 "raid_level": "raid1", 00:11:52.090 "superblock": true, 00:11:52.090 "num_base_bdevs": 3, 00:11:52.090 "num_base_bdevs_discovered": 1, 00:11:52.090 "num_base_bdevs_operational": 3, 00:11:52.090 "base_bdevs_list": [ 00:11:52.090 { 00:11:52.090 "name": "BaseBdev1", 00:11:52.090 "uuid": "f7f9d2db-42cf-11ef-96ac-773515fba644", 00:11:52.090 "is_configured": true, 00:11:52.090 "data_offset": 2048, 00:11:52.090 "data_size": 63488 00:11:52.090 }, 00:11:52.090 { 00:11:52.090 "name": "BaseBdev2", 00:11:52.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.090 "is_configured": false, 00:11:52.090 "data_offset": 0, 00:11:52.090 "data_size": 0 00:11:52.090 }, 00:11:52.090 { 00:11:52.090 "name": "BaseBdev3", 00:11:52.090 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.090 "is_configured": false, 00:11:52.090 "data_offset": 0, 00:11:52.090 "data_size": 0 00:11:52.090 } 00:11:52.090 ] 00:11:52.090 }' 00:11:52.090 17:30:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:52.090 17:30:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:52.348 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:52.607 [2024-07-15 17:30:48.242486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:52.607 BaseBdev2 00:11:52.607 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:52.607 17:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:52.607 17:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:52.607 17:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:52.607 17:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:52.607 17:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:52.607 17:30:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:52.866 17:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:53.125 [ 00:11:53.125 { 00:11:53.125 "name": "BaseBdev2", 00:11:53.125 "aliases": [ 00:11:53.125 "f971ceb3-42cf-11ef-96ac-773515fba644" 00:11:53.125 ], 00:11:53.125 "product_name": "Malloc disk", 00:11:53.126 "block_size": 512, 00:11:53.126 "num_blocks": 65536, 00:11:53.126 "uuid": "f971ceb3-42cf-11ef-96ac-773515fba644", 00:11:53.126 "assigned_rate_limits": { 00:11:53.126 "rw_ios_per_sec": 0, 00:11:53.126 "rw_mbytes_per_sec": 0, 00:11:53.126 "r_mbytes_per_sec": 0, 00:11:53.126 "w_mbytes_per_sec": 0 00:11:53.126 }, 00:11:53.126 "claimed": true, 00:11:53.126 "claim_type": "exclusive_write", 00:11:53.126 "zoned": false, 00:11:53.126 "supported_io_types": { 00:11:53.126 "read": true, 00:11:53.126 "write": true, 00:11:53.126 "unmap": true, 00:11:53.126 "flush": true, 00:11:53.126 "reset": true, 00:11:53.126 "nvme_admin": false, 00:11:53.126 "nvme_io": false, 00:11:53.126 "nvme_io_md": false, 00:11:53.126 "write_zeroes": true, 00:11:53.126 "zcopy": true, 00:11:53.126 "get_zone_info": false, 00:11:53.126 "zone_management": false, 00:11:53.126 "zone_append": false, 00:11:53.126 "compare": false, 00:11:53.126 "compare_and_write": false, 00:11:53.126 "abort": true, 00:11:53.126 "seek_hole": false, 00:11:53.126 "seek_data": false, 00:11:53.126 "copy": true, 00:11:53.126 "nvme_iov_md": false 00:11:53.126 }, 00:11:53.126 "memory_domains": [ 00:11:53.126 { 00:11:53.126 "dma_device_id": "system", 00:11:53.126 "dma_device_type": 1 00:11:53.126 }, 00:11:53.126 { 00:11:53.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.126 "dma_device_type": 2 00:11:53.126 } 00:11:53.126 ], 00:11:53.126 "driver_specific": {} 00:11:53.126 } 00:11:53.126 ] 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.126 17:30:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.126 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.384 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.384 "name": "Existed_Raid", 00:11:53.384 "uuid": "f8ef304f-42cf-11ef-96ac-773515fba644", 00:11:53.384 "strip_size_kb": 0, 00:11:53.384 "state": "configuring", 00:11:53.384 "raid_level": "raid1", 00:11:53.384 "superblock": true, 00:11:53.384 "num_base_bdevs": 3, 00:11:53.384 "num_base_bdevs_discovered": 2, 00:11:53.384 "num_base_bdevs_operational": 3, 00:11:53.384 "base_bdevs_list": [ 00:11:53.384 { 00:11:53.384 "name": "BaseBdev1", 00:11:53.384 "uuid": "f7f9d2db-42cf-11ef-96ac-773515fba644", 00:11:53.384 "is_configured": true, 00:11:53.384 "data_offset": 2048, 00:11:53.384 "data_size": 63488 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "name": "BaseBdev2", 00:11:53.384 "uuid": "f971ceb3-42cf-11ef-96ac-773515fba644", 00:11:53.384 "is_configured": true, 00:11:53.384 "data_offset": 2048, 00:11:53.384 "data_size": 63488 00:11:53.384 }, 00:11:53.384 { 00:11:53.384 "name": "BaseBdev3", 00:11:53.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.384 "is_configured": false, 00:11:53.384 "data_offset": 0, 00:11:53.384 "data_size": 0 00:11:53.384 } 00:11:53.384 ] 00:11:53.384 }' 00:11:53.384 17:30:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.384 17:30:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.642 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:53.901 [2024-07-15 17:30:49.498568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:53.901 [2024-07-15 17:30:49.498649] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1714cd434a00 00:11:53.901 [2024-07-15 17:30:49.498656] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.901 [2024-07-15 17:30:49.498676] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1714cd497e20 00:11:53.901 [2024-07-15 17:30:49.498749] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1714cd434a00 00:11:53.901 [2024-07-15 17:30:49.498754] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1714cd434a00 00:11:53.901 [2024-07-15 17:30:49.498788] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.901 BaseBdev3 00:11:53.901 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:53.901 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:53.901 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:53.901 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:53.901 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:53.901 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:53.901 17:30:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:54.159 [ 00:11:54.159 { 00:11:54.159 "name": "BaseBdev3", 00:11:54.159 "aliases": [ 00:11:54.159 "fa317883-42cf-11ef-96ac-773515fba644" 00:11:54.159 ], 00:11:54.159 "product_name": "Malloc disk", 00:11:54.159 "block_size": 512, 00:11:54.159 "num_blocks": 65536, 00:11:54.159 "uuid": "fa317883-42cf-11ef-96ac-773515fba644", 00:11:54.159 "assigned_rate_limits": { 00:11:54.159 "rw_ios_per_sec": 0, 00:11:54.159 "rw_mbytes_per_sec": 0, 00:11:54.159 "r_mbytes_per_sec": 0, 00:11:54.159 "w_mbytes_per_sec": 0 00:11:54.159 }, 00:11:54.159 "claimed": true, 00:11:54.159 "claim_type": "exclusive_write", 00:11:54.159 "zoned": false, 00:11:54.159 "supported_io_types": { 00:11:54.159 "read": true, 00:11:54.159 "write": true, 00:11:54.159 "unmap": true, 00:11:54.159 "flush": true, 00:11:54.159 "reset": true, 00:11:54.159 "nvme_admin": false, 00:11:54.159 "nvme_io": false, 00:11:54.159 "nvme_io_md": false, 00:11:54.159 "write_zeroes": true, 00:11:54.159 "zcopy": true, 00:11:54.159 "get_zone_info": false, 00:11:54.159 "zone_management": false, 00:11:54.159 "zone_append": false, 00:11:54.159 "compare": false, 00:11:54.159 "compare_and_write": false, 00:11:54.159 "abort": true, 00:11:54.159 "seek_hole": false, 00:11:54.159 "seek_data": false, 00:11:54.159 "copy": true, 00:11:54.159 "nvme_iov_md": false 00:11:54.159 }, 00:11:54.159 "memory_domains": [ 00:11:54.159 { 00:11:54.159 "dma_device_id": "system", 00:11:54.159 "dma_device_type": 1 00:11:54.159 }, 00:11:54.159 { 00:11:54.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.159 "dma_device_type": 2 00:11:54.159 } 00:11:54.159 ], 00:11:54.159 "driver_specific": {} 00:11:54.159 } 00:11:54.159 ] 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:54.159 17:30:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.159 17:30:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.727 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:54.727 "name": "Existed_Raid", 00:11:54.727 "uuid": "f8ef304f-42cf-11ef-96ac-773515fba644", 00:11:54.727 "strip_size_kb": 0, 00:11:54.727 "state": "online", 00:11:54.727 "raid_level": "raid1", 00:11:54.727 "superblock": true, 00:11:54.727 "num_base_bdevs": 3, 00:11:54.727 "num_base_bdevs_discovered": 3, 00:11:54.727 "num_base_bdevs_operational": 3, 00:11:54.727 "base_bdevs_list": [ 00:11:54.727 { 00:11:54.727 "name": "BaseBdev1", 00:11:54.727 "uuid": "f7f9d2db-42cf-11ef-96ac-773515fba644", 00:11:54.727 "is_configured": true, 00:11:54.727 "data_offset": 2048, 00:11:54.727 "data_size": 63488 00:11:54.727 }, 00:11:54.727 { 00:11:54.727 "name": "BaseBdev2", 00:11:54.727 "uuid": "f971ceb3-42cf-11ef-96ac-773515fba644", 00:11:54.727 "is_configured": true, 00:11:54.727 "data_offset": 2048, 00:11:54.727 "data_size": 63488 00:11:54.727 }, 00:11:54.727 { 00:11:54.727 "name": "BaseBdev3", 00:11:54.727 "uuid": "fa317883-42cf-11ef-96ac-773515fba644", 00:11:54.727 "is_configured": true, 00:11:54.727 "data_offset": 2048, 00:11:54.727 "data_size": 63488 00:11:54.727 } 00:11:54.727 ] 00:11:54.727 }' 00:11:54.727 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:54.727 17:30:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.985 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:54.985 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:54.985 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:54.985 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:54.985 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:54.985 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:54.985 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:54.985 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:55.243 [2024-07-15 17:30:50.826525] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:55.243 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:55.244 "name": "Existed_Raid", 00:11:55.244 "aliases": [ 00:11:55.244 "f8ef304f-42cf-11ef-96ac-773515fba644" 00:11:55.244 ], 00:11:55.244 "product_name": "Raid Volume", 00:11:55.244 "block_size": 512, 00:11:55.244 "num_blocks": 63488, 00:11:55.244 "uuid": "f8ef304f-42cf-11ef-96ac-773515fba644", 00:11:55.244 "assigned_rate_limits": { 00:11:55.244 "rw_ios_per_sec": 0, 00:11:55.244 "rw_mbytes_per_sec": 0, 00:11:55.244 "r_mbytes_per_sec": 0, 00:11:55.244 "w_mbytes_per_sec": 0 00:11:55.244 }, 00:11:55.244 "claimed": false, 00:11:55.244 "zoned": false, 00:11:55.244 "supported_io_types": { 00:11:55.244 "read": true, 
00:11:55.244 "write": true, 00:11:55.244 "unmap": false, 00:11:55.244 "flush": false, 00:11:55.244 "reset": true, 00:11:55.244 "nvme_admin": false, 00:11:55.244 "nvme_io": false, 00:11:55.244 "nvme_io_md": false, 00:11:55.244 "write_zeroes": true, 00:11:55.244 "zcopy": false, 00:11:55.244 "get_zone_info": false, 00:11:55.244 "zone_management": false, 00:11:55.244 "zone_append": false, 00:11:55.244 "compare": false, 00:11:55.244 "compare_and_write": false, 00:11:55.244 "abort": false, 00:11:55.244 "seek_hole": false, 00:11:55.244 "seek_data": false, 00:11:55.244 "copy": false, 00:11:55.244 "nvme_iov_md": false 00:11:55.244 }, 00:11:55.244 "memory_domains": [ 00:11:55.244 { 00:11:55.244 "dma_device_id": "system", 00:11:55.244 "dma_device_type": 1 00:11:55.244 }, 00:11:55.244 { 00:11:55.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.244 "dma_device_type": 2 00:11:55.244 }, 00:11:55.244 { 00:11:55.244 "dma_device_id": "system", 00:11:55.244 "dma_device_type": 1 00:11:55.244 }, 00:11:55.244 { 00:11:55.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.244 "dma_device_type": 2 00:11:55.244 }, 00:11:55.244 { 00:11:55.244 "dma_device_id": "system", 00:11:55.244 "dma_device_type": 1 00:11:55.244 }, 00:11:55.244 { 00:11:55.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.244 "dma_device_type": 2 00:11:55.244 } 00:11:55.244 ], 00:11:55.244 "driver_specific": { 00:11:55.244 "raid": { 00:11:55.244 "uuid": "f8ef304f-42cf-11ef-96ac-773515fba644", 00:11:55.244 "strip_size_kb": 0, 00:11:55.244 "state": "online", 00:11:55.244 "raid_level": "raid1", 00:11:55.244 "superblock": true, 00:11:55.244 "num_base_bdevs": 3, 00:11:55.244 "num_base_bdevs_discovered": 3, 00:11:55.244 "num_base_bdevs_operational": 3, 00:11:55.244 "base_bdevs_list": [ 00:11:55.244 { 00:11:55.244 "name": "BaseBdev1", 00:11:55.244 "uuid": "f7f9d2db-42cf-11ef-96ac-773515fba644", 00:11:55.244 "is_configured": true, 00:11:55.244 "data_offset": 2048, 00:11:55.244 "data_size": 63488 00:11:55.244 }, 00:11:55.244 { 00:11:55.244 "name": "BaseBdev2", 00:11:55.244 "uuid": "f971ceb3-42cf-11ef-96ac-773515fba644", 00:11:55.244 "is_configured": true, 00:11:55.244 "data_offset": 2048, 00:11:55.244 "data_size": 63488 00:11:55.244 }, 00:11:55.244 { 00:11:55.244 "name": "BaseBdev3", 00:11:55.244 "uuid": "fa317883-42cf-11ef-96ac-773515fba644", 00:11:55.244 "is_configured": true, 00:11:55.244 "data_offset": 2048, 00:11:55.244 "data_size": 63488 00:11:55.244 } 00:11:55.244 ] 00:11:55.244 } 00:11:55.244 } 00:11:55.244 }' 00:11:55.244 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:55.244 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:55.244 BaseBdev2 00:11:55.244 BaseBdev3' 00:11:55.244 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:55.244 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:55.244 17:30:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:55.502 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:55.502 "name": "BaseBdev1", 00:11:55.502 "aliases": [ 00:11:55.502 "f7f9d2db-42cf-11ef-96ac-773515fba644" 00:11:55.502 ], 00:11:55.502 "product_name": "Malloc disk", 00:11:55.502 
"block_size": 512, 00:11:55.502 "num_blocks": 65536, 00:11:55.502 "uuid": "f7f9d2db-42cf-11ef-96ac-773515fba644", 00:11:55.502 "assigned_rate_limits": { 00:11:55.502 "rw_ios_per_sec": 0, 00:11:55.502 "rw_mbytes_per_sec": 0, 00:11:55.502 "r_mbytes_per_sec": 0, 00:11:55.502 "w_mbytes_per_sec": 0 00:11:55.502 }, 00:11:55.502 "claimed": true, 00:11:55.502 "claim_type": "exclusive_write", 00:11:55.502 "zoned": false, 00:11:55.502 "supported_io_types": { 00:11:55.502 "read": true, 00:11:55.502 "write": true, 00:11:55.502 "unmap": true, 00:11:55.502 "flush": true, 00:11:55.502 "reset": true, 00:11:55.502 "nvme_admin": false, 00:11:55.503 "nvme_io": false, 00:11:55.503 "nvme_io_md": false, 00:11:55.503 "write_zeroes": true, 00:11:55.503 "zcopy": true, 00:11:55.503 "get_zone_info": false, 00:11:55.503 "zone_management": false, 00:11:55.503 "zone_append": false, 00:11:55.503 "compare": false, 00:11:55.503 "compare_and_write": false, 00:11:55.503 "abort": true, 00:11:55.503 "seek_hole": false, 00:11:55.503 "seek_data": false, 00:11:55.503 "copy": true, 00:11:55.503 "nvme_iov_md": false 00:11:55.503 }, 00:11:55.503 "memory_domains": [ 00:11:55.503 { 00:11:55.503 "dma_device_id": "system", 00:11:55.503 "dma_device_type": 1 00:11:55.503 }, 00:11:55.503 { 00:11:55.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.503 "dma_device_type": 2 00:11:55.503 } 00:11:55.503 ], 00:11:55.503 "driver_specific": {} 00:11:55.503 }' 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:55.503 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:55.760 "name": "BaseBdev2", 00:11:55.760 "aliases": [ 00:11:55.760 "f971ceb3-42cf-11ef-96ac-773515fba644" 00:11:55.760 ], 00:11:55.760 "product_name": "Malloc disk", 00:11:55.760 "block_size": 512, 00:11:55.760 "num_blocks": 65536, 00:11:55.760 "uuid": "f971ceb3-42cf-11ef-96ac-773515fba644", 00:11:55.760 "assigned_rate_limits": { 
00:11:55.760 "rw_ios_per_sec": 0, 00:11:55.760 "rw_mbytes_per_sec": 0, 00:11:55.760 "r_mbytes_per_sec": 0, 00:11:55.760 "w_mbytes_per_sec": 0 00:11:55.760 }, 00:11:55.760 "claimed": true, 00:11:55.760 "claim_type": "exclusive_write", 00:11:55.760 "zoned": false, 00:11:55.760 "supported_io_types": { 00:11:55.760 "read": true, 00:11:55.760 "write": true, 00:11:55.760 "unmap": true, 00:11:55.760 "flush": true, 00:11:55.760 "reset": true, 00:11:55.760 "nvme_admin": false, 00:11:55.760 "nvme_io": false, 00:11:55.760 "nvme_io_md": false, 00:11:55.760 "write_zeroes": true, 00:11:55.760 "zcopy": true, 00:11:55.760 "get_zone_info": false, 00:11:55.760 "zone_management": false, 00:11:55.760 "zone_append": false, 00:11:55.760 "compare": false, 00:11:55.760 "compare_and_write": false, 00:11:55.760 "abort": true, 00:11:55.760 "seek_hole": false, 00:11:55.760 "seek_data": false, 00:11:55.760 "copy": true, 00:11:55.760 "nvme_iov_md": false 00:11:55.760 }, 00:11:55.760 "memory_domains": [ 00:11:55.760 { 00:11:55.760 "dma_device_id": "system", 00:11:55.760 "dma_device_type": 1 00:11:55.760 }, 00:11:55.760 { 00:11:55.760 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.760 "dma_device_type": 2 00:11:55.760 } 00:11:55.760 ], 00:11:55.760 "driver_specific": {} 00:11:55.760 }' 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:55.760 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:56.017 "name": "BaseBdev3", 00:11:56.017 "aliases": [ 00:11:56.017 "fa317883-42cf-11ef-96ac-773515fba644" 00:11:56.017 ], 00:11:56.017 "product_name": "Malloc disk", 00:11:56.017 "block_size": 512, 00:11:56.017 "num_blocks": 65536, 00:11:56.017 "uuid": "fa317883-42cf-11ef-96ac-773515fba644", 00:11:56.017 "assigned_rate_limits": { 00:11:56.017 "rw_ios_per_sec": 0, 00:11:56.017 "rw_mbytes_per_sec": 0, 00:11:56.017 "r_mbytes_per_sec": 0, 00:11:56.017 "w_mbytes_per_sec": 0 
00:11:56.017 }, 00:11:56.017 "claimed": true, 00:11:56.017 "claim_type": "exclusive_write", 00:11:56.017 "zoned": false, 00:11:56.017 "supported_io_types": { 00:11:56.017 "read": true, 00:11:56.017 "write": true, 00:11:56.017 "unmap": true, 00:11:56.017 "flush": true, 00:11:56.017 "reset": true, 00:11:56.017 "nvme_admin": false, 00:11:56.017 "nvme_io": false, 00:11:56.017 "nvme_io_md": false, 00:11:56.017 "write_zeroes": true, 00:11:56.017 "zcopy": true, 00:11:56.017 "get_zone_info": false, 00:11:56.017 "zone_management": false, 00:11:56.017 "zone_append": false, 00:11:56.017 "compare": false, 00:11:56.017 "compare_and_write": false, 00:11:56.017 "abort": true, 00:11:56.017 "seek_hole": false, 00:11:56.017 "seek_data": false, 00:11:56.017 "copy": true, 00:11:56.017 "nvme_iov_md": false 00:11:56.017 }, 00:11:56.017 "memory_domains": [ 00:11:56.017 { 00:11:56.017 "dma_device_id": "system", 00:11:56.017 "dma_device_type": 1 00:11:56.017 }, 00:11:56.017 { 00:11:56.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.017 "dma_device_type": 2 00:11:56.017 } 00:11:56.017 ], 00:11:56.017 "driver_specific": {} 00:11:56.017 }' 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:56.017 17:30:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:56.274 [2024-07-15 17:30:52.018592] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.274 17:30:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.274 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.532 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:56.532 "name": "Existed_Raid", 00:11:56.532 "uuid": "f8ef304f-42cf-11ef-96ac-773515fba644", 00:11:56.532 "strip_size_kb": 0, 00:11:56.532 "state": "online", 00:11:56.532 "raid_level": "raid1", 00:11:56.532 "superblock": true, 00:11:56.532 "num_base_bdevs": 3, 00:11:56.532 "num_base_bdevs_discovered": 2, 00:11:56.532 "num_base_bdevs_operational": 2, 00:11:56.532 "base_bdevs_list": [ 00:11:56.532 { 00:11:56.532 "name": null, 00:11:56.532 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.532 "is_configured": false, 00:11:56.532 "data_offset": 2048, 00:11:56.532 "data_size": 63488 00:11:56.532 }, 00:11:56.532 { 00:11:56.532 "name": "BaseBdev2", 00:11:56.532 "uuid": "f971ceb3-42cf-11ef-96ac-773515fba644", 00:11:56.532 "is_configured": true, 00:11:56.532 "data_offset": 2048, 00:11:56.532 "data_size": 63488 00:11:56.532 }, 00:11:56.532 { 00:11:56.532 "name": "BaseBdev3", 00:11:56.532 "uuid": "fa317883-42cf-11ef-96ac-773515fba644", 00:11:56.532 "is_configured": true, 00:11:56.532 "data_offset": 2048, 00:11:56.532 "data_size": 63488 00:11:56.532 } 00:11:56.532 ] 00:11:56.532 }' 00:11:56.532 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:56.532 17:30:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.098 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:57.098 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:57.098 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.098 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:57.356 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:57.356 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.356 17:30:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:11:57.356 [2024-07-15 17:30:53.152633] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:57.356 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:57.356 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:57.356 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.356 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:57.615 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:57.615 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:57.615 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:57.874 [2024-07-15 17:30:53.658636] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:57.874 [2024-07-15 17:30:53.658696] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.874 [2024-07-15 17:30:53.664771] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.874 [2024-07-15 17:30:53.664786] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.874 [2024-07-15 17:30:53.664790] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1714cd434a00 name Existed_Raid, state offline 00:11:57.874 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:57.874 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:57.874 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:57.874 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:58.133 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:58.133 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:58.133 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:58.133 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:58.133 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:58.133 17:30:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:58.393 BaseBdev2 00:11:58.393 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:58.393 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:58.393 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:58.393 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:58.393 17:30:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:58.393 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:58.393 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:58.651 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:58.910 [ 00:11:58.910 { 00:11:58.910 "name": "BaseBdev2", 00:11:58.910 "aliases": [ 00:11:58.910 "fcfe6173-42cf-11ef-96ac-773515fba644" 00:11:58.910 ], 00:11:58.910 "product_name": "Malloc disk", 00:11:58.910 "block_size": 512, 00:11:58.910 "num_blocks": 65536, 00:11:58.910 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:11:58.910 "assigned_rate_limits": { 00:11:58.910 "rw_ios_per_sec": 0, 00:11:58.910 "rw_mbytes_per_sec": 0, 00:11:58.910 "r_mbytes_per_sec": 0, 00:11:58.910 "w_mbytes_per_sec": 0 00:11:58.910 }, 00:11:58.910 "claimed": false, 00:11:58.910 "zoned": false, 00:11:58.910 "supported_io_types": { 00:11:58.910 "read": true, 00:11:58.910 "write": true, 00:11:58.910 "unmap": true, 00:11:58.910 "flush": true, 00:11:58.910 "reset": true, 00:11:58.910 "nvme_admin": false, 00:11:58.910 "nvme_io": false, 00:11:58.910 "nvme_io_md": false, 00:11:58.910 "write_zeroes": true, 00:11:58.910 "zcopy": true, 00:11:58.910 "get_zone_info": false, 00:11:58.910 "zone_management": false, 00:11:58.910 "zone_append": false, 00:11:58.910 "compare": false, 00:11:58.910 "compare_and_write": false, 00:11:58.910 "abort": true, 00:11:58.910 "seek_hole": false, 00:11:58.910 "seek_data": false, 00:11:58.910 "copy": true, 00:11:58.910 "nvme_iov_md": false 00:11:58.910 }, 00:11:58.910 "memory_domains": [ 00:11:58.910 { 00:11:58.910 "dma_device_id": "system", 00:11:58.910 "dma_device_type": 1 00:11:58.910 }, 00:11:58.910 { 00:11:58.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.910 "dma_device_type": 2 00:11:58.910 } 00:11:58.910 ], 00:11:58.910 "driver_specific": {} 00:11:58.910 } 00:11:58.910 ] 00:11:58.910 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:58.910 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:58.910 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:58.910 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:59.169 BaseBdev3 00:11:59.169 17:30:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:59.169 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:59.169 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:59.169 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:59.169 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:59.169 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:59.169 17:30:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:59.477 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:59.735 [ 00:11:59.735 { 00:11:59.735 "name": "BaseBdev3", 00:11:59.735 "aliases": [ 00:11:59.735 "fd760277-42cf-11ef-96ac-773515fba644" 00:11:59.735 ], 00:11:59.735 "product_name": "Malloc disk", 00:11:59.735 "block_size": 512, 00:11:59.735 "num_blocks": 65536, 00:11:59.735 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:11:59.735 "assigned_rate_limits": { 00:11:59.735 "rw_ios_per_sec": 0, 00:11:59.735 "rw_mbytes_per_sec": 0, 00:11:59.735 "r_mbytes_per_sec": 0, 00:11:59.735 "w_mbytes_per_sec": 0 00:11:59.735 }, 00:11:59.735 "claimed": false, 00:11:59.735 "zoned": false, 00:11:59.735 "supported_io_types": { 00:11:59.735 "read": true, 00:11:59.735 "write": true, 00:11:59.735 "unmap": true, 00:11:59.735 "flush": true, 00:11:59.735 "reset": true, 00:11:59.735 "nvme_admin": false, 00:11:59.735 "nvme_io": false, 00:11:59.735 "nvme_io_md": false, 00:11:59.735 "write_zeroes": true, 00:11:59.735 "zcopy": true, 00:11:59.735 "get_zone_info": false, 00:11:59.735 "zone_management": false, 00:11:59.735 "zone_append": false, 00:11:59.735 "compare": false, 00:11:59.735 "compare_and_write": false, 00:11:59.735 "abort": true, 00:11:59.735 "seek_hole": false, 00:11:59.735 "seek_data": false, 00:11:59.736 "copy": true, 00:11:59.736 "nvme_iov_md": false 00:11:59.736 }, 00:11:59.736 "memory_domains": [ 00:11:59.736 { 00:11:59.736 "dma_device_id": "system", 00:11:59.736 "dma_device_type": 1 00:11:59.736 }, 00:11:59.736 { 00:11:59.736 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.736 "dma_device_type": 2 00:11:59.736 } 00:11:59.736 ], 00:11:59.736 "driver_specific": {} 00:11:59.736 } 00:11:59.736 ] 00:11:59.736 17:30:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:59.736 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:59.736 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:59.736 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:59.994 [2024-07-15 17:30:55.820799] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.994 [2024-07-15 17:30:55.820851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.994 [2024-07-15 17:30:55.820861] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.994 [2024-07-15 17:30:55.821430] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.252 17:30:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.509 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:00.509 "name": "Existed_Raid", 00:12:00.509 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:00.509 "strip_size_kb": 0, 00:12:00.509 "state": "configuring", 00:12:00.509 "raid_level": "raid1", 00:12:00.509 "superblock": true, 00:12:00.509 "num_base_bdevs": 3, 00:12:00.509 "num_base_bdevs_discovered": 2, 00:12:00.509 "num_base_bdevs_operational": 3, 00:12:00.509 "base_bdevs_list": [ 00:12:00.509 { 00:12:00.509 "name": "BaseBdev1", 00:12:00.509 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.509 "is_configured": false, 00:12:00.509 "data_offset": 0, 00:12:00.509 "data_size": 0 00:12:00.509 }, 00:12:00.509 { 00:12:00.509 "name": "BaseBdev2", 00:12:00.509 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:00.509 "is_configured": true, 00:12:00.509 "data_offset": 2048, 00:12:00.509 "data_size": 63488 00:12:00.509 }, 00:12:00.509 { 00:12:00.509 "name": "BaseBdev3", 00:12:00.509 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:00.509 "is_configured": true, 00:12:00.509 "data_offset": 2048, 00:12:00.509 "data_size": 63488 00:12:00.509 } 00:12:00.509 ] 00:12:00.509 }' 00:12:00.509 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:00.509 17:30:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.768 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:01.026 [2024-07-15 17:30:56.732815] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.026 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:01.026 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:01.026 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:01.026 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:01.026 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:01.026 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:01.026 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:01.026 17:30:56 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:01.027 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:01.027 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:01.027 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.027 17:30:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.286 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.286 "name": "Existed_Raid", 00:12:01.286 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:01.286 "strip_size_kb": 0, 00:12:01.286 "state": "configuring", 00:12:01.286 "raid_level": "raid1", 00:12:01.286 "superblock": true, 00:12:01.286 "num_base_bdevs": 3, 00:12:01.286 "num_base_bdevs_discovered": 1, 00:12:01.286 "num_base_bdevs_operational": 3, 00:12:01.286 "base_bdevs_list": [ 00:12:01.286 { 00:12:01.286 "name": "BaseBdev1", 00:12:01.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.286 "is_configured": false, 00:12:01.286 "data_offset": 0, 00:12:01.286 "data_size": 0 00:12:01.286 }, 00:12:01.286 { 00:12:01.286 "name": null, 00:12:01.286 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:01.286 "is_configured": false, 00:12:01.286 "data_offset": 2048, 00:12:01.286 "data_size": 63488 00:12:01.286 }, 00:12:01.286 { 00:12:01.286 "name": "BaseBdev3", 00:12:01.286 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:01.286 "is_configured": true, 00:12:01.286 "data_offset": 2048, 00:12:01.286 "data_size": 63488 00:12:01.286 } 00:12:01.286 ] 00:12:01.286 }' 00:12:01.286 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.286 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.854 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.854 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:01.854 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:01.854 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.112 [2024-07-15 17:30:57.860998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.112 BaseBdev1 00:12:02.112 17:30:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:02.112 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:02.112 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:02.112 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:02.112 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:02.112 17:30:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:02.112 17:30:57 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:02.371 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.630 [ 00:12:02.630 { 00:12:02.630 "name": "BaseBdev1", 00:12:02.630 "aliases": [ 00:12:02.630 "ff2d78cf-42cf-11ef-96ac-773515fba644" 00:12:02.630 ], 00:12:02.630 "product_name": "Malloc disk", 00:12:02.630 "block_size": 512, 00:12:02.630 "num_blocks": 65536, 00:12:02.630 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:02.630 "assigned_rate_limits": { 00:12:02.630 "rw_ios_per_sec": 0, 00:12:02.630 "rw_mbytes_per_sec": 0, 00:12:02.630 "r_mbytes_per_sec": 0, 00:12:02.630 "w_mbytes_per_sec": 0 00:12:02.630 }, 00:12:02.630 "claimed": true, 00:12:02.630 "claim_type": "exclusive_write", 00:12:02.630 "zoned": false, 00:12:02.630 "supported_io_types": { 00:12:02.630 "read": true, 00:12:02.630 "write": true, 00:12:02.630 "unmap": true, 00:12:02.630 "flush": true, 00:12:02.630 "reset": true, 00:12:02.630 "nvme_admin": false, 00:12:02.630 "nvme_io": false, 00:12:02.630 "nvme_io_md": false, 00:12:02.630 "write_zeroes": true, 00:12:02.630 "zcopy": true, 00:12:02.630 "get_zone_info": false, 00:12:02.630 "zone_management": false, 00:12:02.630 "zone_append": false, 00:12:02.630 "compare": false, 00:12:02.630 "compare_and_write": false, 00:12:02.630 "abort": true, 00:12:02.630 "seek_hole": false, 00:12:02.630 "seek_data": false, 00:12:02.630 "copy": true, 00:12:02.630 "nvme_iov_md": false 00:12:02.630 }, 00:12:02.630 "memory_domains": [ 00:12:02.630 { 00:12:02.630 "dma_device_id": "system", 00:12:02.630 "dma_device_type": 1 00:12:02.630 }, 00:12:02.630 { 00:12:02.630 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.630 "dma_device_type": 2 00:12:02.630 } 00:12:02.630 ], 00:12:02.630 "driver_specific": {} 00:12:02.630 } 00:12:02.630 ] 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.630 17:30:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.918 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.918 "name": "Existed_Raid", 00:12:02.918 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:02.918 "strip_size_kb": 0, 00:12:02.918 "state": "configuring", 00:12:02.918 "raid_level": "raid1", 00:12:02.918 "superblock": true, 00:12:02.918 "num_base_bdevs": 3, 00:12:02.918 "num_base_bdevs_discovered": 2, 00:12:02.918 "num_base_bdevs_operational": 3, 00:12:02.918 "base_bdevs_list": [ 00:12:02.918 { 00:12:02.918 "name": "BaseBdev1", 00:12:02.918 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:02.918 "is_configured": true, 00:12:02.918 "data_offset": 2048, 00:12:02.918 "data_size": 63488 00:12:02.918 }, 00:12:02.918 { 00:12:02.918 "name": null, 00:12:02.918 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:02.918 "is_configured": false, 00:12:02.918 "data_offset": 2048, 00:12:02.918 "data_size": 63488 00:12:02.918 }, 00:12:02.918 { 00:12:02.918 "name": "BaseBdev3", 00:12:02.918 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:02.918 "is_configured": true, 00:12:02.918 "data_offset": 2048, 00:12:02.918 "data_size": 63488 00:12:02.918 } 00:12:02.918 ] 00:12:02.918 }' 00:12:02.918 17:30:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.918 17:30:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.176 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.176 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:03.434 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:03.434 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:04.001 [2024-07-15 17:30:59.532942] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.001 "name": "Existed_Raid", 00:12:04.001 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:04.001 "strip_size_kb": 0, 00:12:04.001 "state": "configuring", 00:12:04.001 "raid_level": "raid1", 00:12:04.001 "superblock": true, 00:12:04.001 "num_base_bdevs": 3, 00:12:04.001 "num_base_bdevs_discovered": 1, 00:12:04.001 "num_base_bdevs_operational": 3, 00:12:04.001 "base_bdevs_list": [ 00:12:04.001 { 00:12:04.001 "name": "BaseBdev1", 00:12:04.001 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:04.001 "is_configured": true, 00:12:04.001 "data_offset": 2048, 00:12:04.001 "data_size": 63488 00:12:04.001 }, 00:12:04.001 { 00:12:04.001 "name": null, 00:12:04.001 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:04.001 "is_configured": false, 00:12:04.001 "data_offset": 2048, 00:12:04.001 "data_size": 63488 00:12:04.001 }, 00:12:04.001 { 00:12:04.001 "name": null, 00:12:04.001 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:04.001 "is_configured": false, 00:12:04.001 "data_offset": 2048, 00:12:04.001 "data_size": 63488 00:12:04.001 } 00:12:04.001 ] 00:12:04.001 }' 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.001 17:30:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.567 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.567 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:04.826 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:04.826 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:05.085 [2024-07-15 17:31:00.728996] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.085 17:31:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.343 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:05.343 "name": "Existed_Raid", 00:12:05.343 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:05.343 "strip_size_kb": 0, 00:12:05.343 "state": "configuring", 00:12:05.343 "raid_level": "raid1", 00:12:05.343 "superblock": true, 00:12:05.343 "num_base_bdevs": 3, 00:12:05.343 "num_base_bdevs_discovered": 2, 00:12:05.343 "num_base_bdevs_operational": 3, 00:12:05.343 "base_bdevs_list": [ 00:12:05.343 { 00:12:05.343 "name": "BaseBdev1", 00:12:05.343 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:05.343 "is_configured": true, 00:12:05.343 "data_offset": 2048, 00:12:05.343 "data_size": 63488 00:12:05.343 }, 00:12:05.343 { 00:12:05.343 "name": null, 00:12:05.343 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:05.343 "is_configured": false, 00:12:05.343 "data_offset": 2048, 00:12:05.343 "data_size": 63488 00:12:05.343 }, 00:12:05.343 { 00:12:05.343 "name": "BaseBdev3", 00:12:05.343 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:05.343 "is_configured": true, 00:12:05.343 "data_offset": 2048, 00:12:05.343 "data_size": 63488 00:12:05.343 } 00:12:05.343 ] 00:12:05.343 }' 00:12:05.343 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:05.343 17:31:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.602 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:05.602 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.860 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:05.860 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:06.117 [2024-07-15 17:31:01.845064] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.117 17:31:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:06.375 17:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:06.375 "name": "Existed_Raid", 00:12:06.375 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:06.375 "strip_size_kb": 0, 00:12:06.375 "state": "configuring", 00:12:06.375 "raid_level": "raid1", 00:12:06.375 "superblock": true, 00:12:06.375 "num_base_bdevs": 3, 00:12:06.375 "num_base_bdevs_discovered": 1, 00:12:06.375 "num_base_bdevs_operational": 3, 00:12:06.375 "base_bdevs_list": [ 00:12:06.375 { 00:12:06.375 "name": null, 00:12:06.375 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:06.375 "is_configured": false, 00:12:06.375 "data_offset": 2048, 00:12:06.375 "data_size": 63488 00:12:06.375 }, 00:12:06.375 { 00:12:06.375 "name": null, 00:12:06.375 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:06.375 "is_configured": false, 00:12:06.375 "data_offset": 2048, 00:12:06.375 "data_size": 63488 00:12:06.375 }, 00:12:06.375 { 00:12:06.375 "name": "BaseBdev3", 00:12:06.375 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:06.375 "is_configured": true, 00:12:06.375 "data_offset": 2048, 00:12:06.375 "data_size": 63488 00:12:06.375 } 00:12:06.375 ] 00:12:06.375 }' 00:12:06.375 17:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:06.375 17:31:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:06.941 17:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.941 17:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:07.198 17:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:07.198 17:31:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:07.198 [2024-07-15 17:31:03.027000] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.456 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:07.714 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:07.714 "name": "Existed_Raid", 00:12:07.714 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:07.714 "strip_size_kb": 0, 00:12:07.714 "state": "configuring", 00:12:07.714 "raid_level": "raid1", 00:12:07.714 "superblock": true, 00:12:07.714 "num_base_bdevs": 3, 00:12:07.714 "num_base_bdevs_discovered": 2, 00:12:07.714 "num_base_bdevs_operational": 3, 00:12:07.714 "base_bdevs_list": [ 00:12:07.714 { 00:12:07.714 "name": null, 00:12:07.714 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:07.714 "is_configured": false, 00:12:07.714 "data_offset": 2048, 00:12:07.714 "data_size": 63488 00:12:07.714 }, 00:12:07.714 { 00:12:07.714 "name": "BaseBdev2", 00:12:07.714 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:07.714 "is_configured": true, 00:12:07.714 "data_offset": 2048, 00:12:07.714 "data_size": 63488 00:12:07.714 }, 00:12:07.714 { 00:12:07.714 "name": "BaseBdev3", 00:12:07.714 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:07.714 "is_configured": true, 00:12:07.714 "data_offset": 2048, 00:12:07.714 "data_size": 63488 00:12:07.714 } 00:12:07.714 ] 00:12:07.714 }' 00:12:07.714 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:07.714 17:31:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:07.972 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.972 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:08.229 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:08.229 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.229 17:31:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:08.487 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ff2d78cf-42cf-11ef-96ac-773515fba644 00:12:08.745 [2024-07-15 17:31:04.399234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:08.745 [2024-07-15 17:31:04.399284] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1714cd434f00 00:12:08.745 [2024-07-15 17:31:04.399289] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:08.745 [2024-07-15 17:31:04.399310] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1714cd497e20 00:12:08.745 [2024-07-15 17:31:04.399359] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1714cd434f00 00:12:08.745 [2024-07-15 17:31:04.399364] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1714cd434f00 00:12:08.746 [2024-07-15 17:31:04.399385] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.746 NewBaseBdev 00:12:08.746 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:08.746 17:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:08.746 17:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:08.746 17:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:08.746 17:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:08.746 17:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:08.746 17:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:09.003 17:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:09.351 [ 00:12:09.351 { 00:12:09.351 "name": "NewBaseBdev", 00:12:09.351 "aliases": [ 00:12:09.351 "ff2d78cf-42cf-11ef-96ac-773515fba644" 00:12:09.351 ], 00:12:09.351 "product_name": "Malloc disk", 00:12:09.351 "block_size": 512, 00:12:09.351 "num_blocks": 65536, 00:12:09.351 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:09.351 "assigned_rate_limits": { 00:12:09.351 "rw_ios_per_sec": 0, 00:12:09.351 "rw_mbytes_per_sec": 0, 00:12:09.351 "r_mbytes_per_sec": 0, 00:12:09.351 "w_mbytes_per_sec": 0 00:12:09.351 }, 00:12:09.351 "claimed": true, 00:12:09.351 "claim_type": "exclusive_write", 00:12:09.351 "zoned": false, 00:12:09.351 "supported_io_types": { 00:12:09.351 "read": true, 00:12:09.351 "write": true, 00:12:09.351 "unmap": true, 00:12:09.351 "flush": true, 00:12:09.351 "reset": true, 00:12:09.351 "nvme_admin": false, 00:12:09.351 "nvme_io": false, 00:12:09.351 "nvme_io_md": false, 00:12:09.351 "write_zeroes": true, 00:12:09.351 "zcopy": true, 00:12:09.351 "get_zone_info": false, 00:12:09.351 "zone_management": false, 00:12:09.351 "zone_append": false, 00:12:09.351 "compare": false, 00:12:09.351 "compare_and_write": false, 00:12:09.351 "abort": true, 00:12:09.351 "seek_hole": false, 00:12:09.351 "seek_data": false, 00:12:09.351 "copy": true, 00:12:09.351 "nvme_iov_md": false 00:12:09.351 }, 00:12:09.351 "memory_domains": [ 00:12:09.351 { 00:12:09.351 "dma_device_id": "system", 00:12:09.351 "dma_device_type": 1 00:12:09.351 }, 00:12:09.351 { 00:12:09.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.351 "dma_device_type": 2 00:12:09.352 } 00:12:09.352 ], 00:12:09.352 "driver_specific": {} 00:12:09.352 } 00:12:09.352 ] 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:09.352 17:31:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.352 17:31:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.610 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.610 "name": "Existed_Raid", 00:12:09.610 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:09.610 "strip_size_kb": 0, 00:12:09.610 "state": "online", 00:12:09.610 "raid_level": "raid1", 00:12:09.610 "superblock": true, 00:12:09.610 "num_base_bdevs": 3, 00:12:09.610 "num_base_bdevs_discovered": 3, 00:12:09.610 "num_base_bdevs_operational": 3, 00:12:09.610 "base_bdevs_list": [ 00:12:09.610 { 00:12:09.610 "name": "NewBaseBdev", 00:12:09.610 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:09.610 "is_configured": true, 00:12:09.610 "data_offset": 2048, 00:12:09.610 "data_size": 63488 00:12:09.610 }, 00:12:09.610 { 00:12:09.610 "name": "BaseBdev2", 00:12:09.610 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:09.610 "is_configured": true, 00:12:09.610 "data_offset": 2048, 00:12:09.610 "data_size": 63488 00:12:09.610 }, 00:12:09.610 { 00:12:09.610 "name": "BaseBdev3", 00:12:09.610 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:09.610 "is_configured": true, 00:12:09.610 "data_offset": 2048, 00:12:09.610 "data_size": 63488 00:12:09.610 } 00:12:09.610 ] 00:12:09.610 }' 00:12:09.610 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.610 17:31:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.868 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:09.868 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:09.868 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:09.868 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:09.868 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:09.868 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:09.868 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:09.868 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:10.126 [2024-07-15 17:31:05.795149] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:10.126 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:10.126 "name": "Existed_Raid", 00:12:10.126 "aliases": [ 00:12:10.126 "fdf62fa9-42cf-11ef-96ac-773515fba644" 00:12:10.126 ], 00:12:10.126 "product_name": "Raid Volume", 00:12:10.126 "block_size": 512, 00:12:10.126 "num_blocks": 63488, 00:12:10.126 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:10.126 "assigned_rate_limits": { 00:12:10.126 "rw_ios_per_sec": 0, 00:12:10.126 "rw_mbytes_per_sec": 0, 00:12:10.126 "r_mbytes_per_sec": 0, 00:12:10.126 "w_mbytes_per_sec": 0 00:12:10.126 }, 00:12:10.126 "claimed": false, 00:12:10.126 "zoned": false, 00:12:10.126 "supported_io_types": { 00:12:10.126 "read": true, 00:12:10.126 "write": true, 00:12:10.126 "unmap": false, 00:12:10.126 "flush": false, 00:12:10.126 "reset": true, 00:12:10.126 "nvme_admin": false, 00:12:10.126 "nvme_io": false, 00:12:10.126 "nvme_io_md": false, 00:12:10.126 "write_zeroes": true, 00:12:10.126 "zcopy": false, 00:12:10.126 "get_zone_info": false, 00:12:10.126 "zone_management": false, 00:12:10.126 "zone_append": false, 00:12:10.126 "compare": false, 00:12:10.126 "compare_and_write": false, 00:12:10.126 "abort": false, 00:12:10.126 "seek_hole": false, 00:12:10.126 "seek_data": false, 00:12:10.126 "copy": false, 00:12:10.126 "nvme_iov_md": false 00:12:10.126 }, 00:12:10.126 "memory_domains": [ 00:12:10.126 { 00:12:10.126 "dma_device_id": "system", 00:12:10.126 "dma_device_type": 1 00:12:10.126 }, 00:12:10.126 { 00:12:10.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.126 "dma_device_type": 2 00:12:10.126 }, 00:12:10.126 { 00:12:10.126 "dma_device_id": "system", 00:12:10.126 "dma_device_type": 1 00:12:10.126 }, 00:12:10.126 { 00:12:10.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.126 "dma_device_type": 2 00:12:10.126 }, 00:12:10.126 { 00:12:10.126 "dma_device_id": "system", 00:12:10.126 "dma_device_type": 1 00:12:10.126 }, 00:12:10.126 { 00:12:10.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.126 "dma_device_type": 2 00:12:10.126 } 00:12:10.126 ], 00:12:10.126 "driver_specific": { 00:12:10.126 "raid": { 00:12:10.127 "uuid": "fdf62fa9-42cf-11ef-96ac-773515fba644", 00:12:10.127 "strip_size_kb": 0, 00:12:10.127 "state": "online", 00:12:10.127 "raid_level": "raid1", 00:12:10.127 "superblock": true, 00:12:10.127 "num_base_bdevs": 3, 00:12:10.127 "num_base_bdevs_discovered": 3, 00:12:10.127 "num_base_bdevs_operational": 3, 00:12:10.127 "base_bdevs_list": [ 00:12:10.127 { 00:12:10.127 "name": "NewBaseBdev", 00:12:10.127 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:10.127 "is_configured": true, 00:12:10.127 "data_offset": 2048, 00:12:10.127 "data_size": 63488 00:12:10.127 }, 00:12:10.127 { 00:12:10.127 "name": "BaseBdev2", 00:12:10.127 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:10.127 "is_configured": true, 00:12:10.127 "data_offset": 2048, 00:12:10.127 "data_size": 63488 00:12:10.127 }, 00:12:10.127 { 00:12:10.127 "name": "BaseBdev3", 00:12:10.127 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:10.127 "is_configured": true, 00:12:10.127 "data_offset": 2048, 00:12:10.127 "data_size": 63488 00:12:10.127 } 00:12:10.127 ] 00:12:10.127 } 00:12:10.127 } 00:12:10.127 }' 00:12:10.127 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:10.127 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:10.127 BaseBdev2 00:12:10.127 BaseBdev3' 00:12:10.127 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:10.127 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:10.127 17:31:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:10.384 "name": "NewBaseBdev", 00:12:10.384 "aliases": [ 00:12:10.384 "ff2d78cf-42cf-11ef-96ac-773515fba644" 00:12:10.384 ], 00:12:10.384 "product_name": "Malloc disk", 00:12:10.384 "block_size": 512, 00:12:10.384 "num_blocks": 65536, 00:12:10.384 "uuid": "ff2d78cf-42cf-11ef-96ac-773515fba644", 00:12:10.384 "assigned_rate_limits": { 00:12:10.384 "rw_ios_per_sec": 0, 00:12:10.384 "rw_mbytes_per_sec": 0, 00:12:10.384 "r_mbytes_per_sec": 0, 00:12:10.384 "w_mbytes_per_sec": 0 00:12:10.384 }, 00:12:10.384 "claimed": true, 00:12:10.384 "claim_type": "exclusive_write", 00:12:10.384 "zoned": false, 00:12:10.384 "supported_io_types": { 00:12:10.384 "read": true, 00:12:10.384 "write": true, 00:12:10.384 "unmap": true, 00:12:10.384 "flush": true, 00:12:10.384 "reset": true, 00:12:10.384 "nvme_admin": false, 00:12:10.384 "nvme_io": false, 00:12:10.384 "nvme_io_md": false, 00:12:10.384 "write_zeroes": true, 00:12:10.384 "zcopy": true, 00:12:10.384 "get_zone_info": false, 00:12:10.384 "zone_management": false, 00:12:10.384 "zone_append": false, 00:12:10.384 "compare": false, 00:12:10.384 "compare_and_write": false, 00:12:10.384 "abort": true, 00:12:10.384 "seek_hole": false, 00:12:10.384 "seek_data": false, 00:12:10.384 "copy": true, 00:12:10.384 "nvme_iov_md": false 00:12:10.384 }, 00:12:10.384 "memory_domains": [ 00:12:10.384 { 00:12:10.384 "dma_device_id": "system", 00:12:10.384 "dma_device_type": 1 00:12:10.384 }, 00:12:10.384 { 00:12:10.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.384 "dma_device_type": 2 00:12:10.384 } 00:12:10.384 ], 00:12:10.384 "driver_specific": {} 00:12:10.384 }' 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:10.384 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:10.642 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:10.642 "name": "BaseBdev2", 00:12:10.642 "aliases": [ 00:12:10.642 "fcfe6173-42cf-11ef-96ac-773515fba644" 00:12:10.642 ], 00:12:10.642 "product_name": "Malloc disk", 00:12:10.642 "block_size": 512, 00:12:10.642 "num_blocks": 65536, 00:12:10.642 "uuid": "fcfe6173-42cf-11ef-96ac-773515fba644", 00:12:10.642 "assigned_rate_limits": { 00:12:10.642 "rw_ios_per_sec": 0, 00:12:10.642 "rw_mbytes_per_sec": 0, 00:12:10.642 "r_mbytes_per_sec": 0, 00:12:10.642 "w_mbytes_per_sec": 0 00:12:10.642 }, 00:12:10.642 "claimed": true, 00:12:10.642 "claim_type": "exclusive_write", 00:12:10.642 "zoned": false, 00:12:10.642 "supported_io_types": { 00:12:10.642 "read": true, 00:12:10.642 "write": true, 00:12:10.642 "unmap": true, 00:12:10.642 "flush": true, 00:12:10.642 "reset": true, 00:12:10.642 "nvme_admin": false, 00:12:10.642 "nvme_io": false, 00:12:10.642 "nvme_io_md": false, 00:12:10.642 "write_zeroes": true, 00:12:10.642 "zcopy": true, 00:12:10.642 "get_zone_info": false, 00:12:10.642 "zone_management": false, 00:12:10.642 "zone_append": false, 00:12:10.642 "compare": false, 00:12:10.642 "compare_and_write": false, 00:12:10.642 "abort": true, 00:12:10.642 "seek_hole": false, 00:12:10.642 "seek_data": false, 00:12:10.642 "copy": true, 00:12:10.642 "nvme_iov_md": false 00:12:10.642 }, 00:12:10.642 "memory_domains": [ 00:12:10.642 { 00:12:10.642 "dma_device_id": "system", 00:12:10.642 "dma_device_type": 1 00:12:10.642 }, 00:12:10.642 { 00:12:10.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.642 "dma_device_type": 2 00:12:10.642 } 00:12:10.642 ], 00:12:10.642 "driver_specific": {} 00:12:10.642 }' 00:12:10.642 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:10.642 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:10.642 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:10.642 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:10.900 17:31:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:10.900 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:11.158 "name": "BaseBdev3", 00:12:11.158 "aliases": [ 00:12:11.158 "fd760277-42cf-11ef-96ac-773515fba644" 00:12:11.158 ], 00:12:11.158 "product_name": "Malloc disk", 00:12:11.158 "block_size": 512, 00:12:11.158 "num_blocks": 65536, 00:12:11.158 "uuid": "fd760277-42cf-11ef-96ac-773515fba644", 00:12:11.158 "assigned_rate_limits": { 00:12:11.158 "rw_ios_per_sec": 0, 00:12:11.158 "rw_mbytes_per_sec": 0, 00:12:11.158 "r_mbytes_per_sec": 0, 00:12:11.158 "w_mbytes_per_sec": 0 00:12:11.158 }, 00:12:11.158 "claimed": true, 00:12:11.158 "claim_type": "exclusive_write", 00:12:11.158 "zoned": false, 00:12:11.158 "supported_io_types": { 00:12:11.158 "read": true, 00:12:11.158 "write": true, 00:12:11.158 "unmap": true, 00:12:11.158 "flush": true, 00:12:11.158 "reset": true, 00:12:11.158 "nvme_admin": false, 00:12:11.158 "nvme_io": false, 00:12:11.158 "nvme_io_md": false, 00:12:11.158 "write_zeroes": true, 00:12:11.158 "zcopy": true, 00:12:11.158 "get_zone_info": false, 00:12:11.158 "zone_management": false, 00:12:11.158 "zone_append": false, 00:12:11.158 "compare": false, 00:12:11.158 "compare_and_write": false, 00:12:11.158 "abort": true, 00:12:11.158 "seek_hole": false, 00:12:11.158 "seek_data": false, 00:12:11.158 "copy": true, 00:12:11.158 "nvme_iov_md": false 00:12:11.158 }, 00:12:11.158 "memory_domains": [ 00:12:11.158 { 00:12:11.158 "dma_device_id": "system", 00:12:11.158 "dma_device_type": 1 00:12:11.158 }, 00:12:11.158 { 00:12:11.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.158 "dma_device_type": 2 00:12:11.158 } 00:12:11.158 ], 00:12:11.158 "driver_specific": {} 00:12:11.158 }' 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:11.158 17:31:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:12:11.416 [2024-07-15 17:31:07.075126] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.416 [2024-07-15 17:31:07.075146] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:11.416 [2024-07-15 17:31:07.075168] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:11.416 [2024-07-15 17:31:07.075234] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:11.416 [2024-07-15 17:31:07.075240] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1714cd434f00 name Existed_Raid, state offline 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 56821 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 56821 ']' 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 56821 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 56821 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:11.416 killing process with pid 56821 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56821' 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 56821 00:12:11.416 [2024-07-15 17:31:07.101685] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:11.416 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 56821 00:12:11.416 [2024-07-15 17:31:07.119311] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.674 17:31:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:11.674 00:12:11.674 real 0m24.069s 00:12:11.674 user 0m43.927s 00:12:11.674 sys 0m3.431s 00:12:11.674 ************************************ 00:12:11.674 END TEST raid_state_function_test_sb 00:12:11.674 ************************************ 00:12:11.674 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:11.674 17:31:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.674 17:31:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:11.674 17:31:07 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:11.674 17:31:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:11.675 17:31:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:11.675 17:31:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.675 ************************************ 00:12:11.675 START TEST raid_superblock_test 00:12:11.675 ************************************ 00:12:11.675 17:31:07 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=57549 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 57549 /var/tmp/spdk-raid.sock 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 57549 ']' 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:11.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:11.675 17:31:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.675 [2024-07-15 17:31:07.357564] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:12:11.675 [2024-07-15 17:31:07.357717] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:12.241 EAL: TSC is not safe to use in SMP mode 00:12:12.241 EAL: TSC is not invariant 00:12:12.241 [2024-07-15 17:31:07.882443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.241 [2024-07-15 17:31:07.966511] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:12.241 [2024-07-15 17:31:07.968730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.241 [2024-07-15 17:31:07.969566] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.241 [2024-07-15 17:31:07.969581] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:12.850 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:12.850 malloc1 00:12:13.108 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:13.367 [2024-07-15 17:31:08.950638] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:13.367 [2024-07-15 17:31:08.950694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.367 [2024-07-15 17:31:08.950723] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f634780 00:12:13.368 [2024-07-15 17:31:08.950732] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.368 [2024-07-15 17:31:08.951704] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.368 [2024-07-15 17:31:08.951737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:13.368 pt1 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_pt=pt2 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:13.368 17:31:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:13.626 malloc2 00:12:13.626 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:13.626 [2024-07-15 17:31:09.450633] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:13.626 [2024-07-15 17:31:09.450683] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:13.626 [2024-07-15 17:31:09.450713] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f634c80 00:12:13.626 [2024-07-15 17:31:09.450721] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:13.626 [2024-07-15 17:31:09.451453] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:13.626 [2024-07-15 17:31:09.451484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:13.626 pt2 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:13.885 malloc3 00:12:13.885 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:14.143 [2024-07-15 17:31:09.954674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:14.143 [2024-07-15 17:31:09.954741] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:14.143 [2024-07-15 17:31:09.954754] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f635180 00:12:14.143 [2024-07-15 17:31:09.954762] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:14.143 [2024-07-15 17:31:09.955449] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:14.143 [2024-07-15 17:31:09.955478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:14.143 pt3 00:12:14.401 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:14.401 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:14.401 17:31:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:14.660 [2024-07-15 17:31:10.234702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:14.660 [2024-07-15 17:31:10.235305] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:14.660 [2024-07-15 17:31:10.235330] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:14.660 [2024-07-15 17:31:10.235394] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d948f635400 00:12:14.660 [2024-07-15 17:31:10.235400] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:14.660 [2024-07-15 17:31:10.235441] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d948f697e20 00:12:14.660 [2024-07-15 17:31:10.235540] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d948f635400 00:12:14.660 [2024-07-15 17:31:10.235546] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d948f635400 00:12:14.660 [2024-07-15 17:31:10.235574] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.660 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.918 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:14.918 "name": "raid_bdev1", 00:12:14.918 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:14.918 "strip_size_kb": 0, 00:12:14.918 "state": "online", 00:12:14.918 "raid_level": "raid1", 00:12:14.918 "superblock": true, 00:12:14.918 "num_base_bdevs": 3, 00:12:14.918 
"num_base_bdevs_discovered": 3, 00:12:14.918 "num_base_bdevs_operational": 3, 00:12:14.918 "base_bdevs_list": [ 00:12:14.918 { 00:12:14.918 "name": "pt1", 00:12:14.918 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:14.918 "is_configured": true, 00:12:14.918 "data_offset": 2048, 00:12:14.919 "data_size": 63488 00:12:14.919 }, 00:12:14.919 { 00:12:14.919 "name": "pt2", 00:12:14.919 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:14.919 "is_configured": true, 00:12:14.919 "data_offset": 2048, 00:12:14.919 "data_size": 63488 00:12:14.919 }, 00:12:14.919 { 00:12:14.919 "name": "pt3", 00:12:14.919 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:14.919 "is_configured": true, 00:12:14.919 "data_offset": 2048, 00:12:14.919 "data_size": 63488 00:12:14.919 } 00:12:14.919 ] 00:12:14.919 }' 00:12:14.919 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:14.919 17:31:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:15.176 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:15.176 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:15.176 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:15.176 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:15.176 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:15.176 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:15.176 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:15.176 17:31:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:15.435 [2024-07-15 17:31:11.090749] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:15.435 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:15.435 "name": "raid_bdev1", 00:12:15.435 "aliases": [ 00:12:15.435 "068d9200-42d0-11ef-96ac-773515fba644" 00:12:15.435 ], 00:12:15.435 "product_name": "Raid Volume", 00:12:15.435 "block_size": 512, 00:12:15.435 "num_blocks": 63488, 00:12:15.435 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:15.435 "assigned_rate_limits": { 00:12:15.435 "rw_ios_per_sec": 0, 00:12:15.435 "rw_mbytes_per_sec": 0, 00:12:15.435 "r_mbytes_per_sec": 0, 00:12:15.435 "w_mbytes_per_sec": 0 00:12:15.435 }, 00:12:15.435 "claimed": false, 00:12:15.435 "zoned": false, 00:12:15.435 "supported_io_types": { 00:12:15.435 "read": true, 00:12:15.435 "write": true, 00:12:15.435 "unmap": false, 00:12:15.435 "flush": false, 00:12:15.435 "reset": true, 00:12:15.435 "nvme_admin": false, 00:12:15.435 "nvme_io": false, 00:12:15.435 "nvme_io_md": false, 00:12:15.435 "write_zeroes": true, 00:12:15.435 "zcopy": false, 00:12:15.435 "get_zone_info": false, 00:12:15.435 "zone_management": false, 00:12:15.435 "zone_append": false, 00:12:15.435 "compare": false, 00:12:15.435 "compare_and_write": false, 00:12:15.435 "abort": false, 00:12:15.435 "seek_hole": false, 00:12:15.435 "seek_data": false, 00:12:15.435 "copy": false, 00:12:15.435 "nvme_iov_md": false 00:12:15.435 }, 00:12:15.435 "memory_domains": [ 00:12:15.435 { 00:12:15.435 "dma_device_id": "system", 00:12:15.435 "dma_device_type": 1 00:12:15.435 }, 00:12:15.435 { 
00:12:15.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.435 "dma_device_type": 2 00:12:15.435 }, 00:12:15.435 { 00:12:15.435 "dma_device_id": "system", 00:12:15.435 "dma_device_type": 1 00:12:15.435 }, 00:12:15.435 { 00:12:15.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.435 "dma_device_type": 2 00:12:15.435 }, 00:12:15.435 { 00:12:15.435 "dma_device_id": "system", 00:12:15.435 "dma_device_type": 1 00:12:15.435 }, 00:12:15.435 { 00:12:15.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.435 "dma_device_type": 2 00:12:15.435 } 00:12:15.435 ], 00:12:15.435 "driver_specific": { 00:12:15.435 "raid": { 00:12:15.435 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:15.435 "strip_size_kb": 0, 00:12:15.435 "state": "online", 00:12:15.435 "raid_level": "raid1", 00:12:15.435 "superblock": true, 00:12:15.435 "num_base_bdevs": 3, 00:12:15.435 "num_base_bdevs_discovered": 3, 00:12:15.435 "num_base_bdevs_operational": 3, 00:12:15.435 "base_bdevs_list": [ 00:12:15.435 { 00:12:15.435 "name": "pt1", 00:12:15.435 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.435 "is_configured": true, 00:12:15.435 "data_offset": 2048, 00:12:15.435 "data_size": 63488 00:12:15.435 }, 00:12:15.435 { 00:12:15.435 "name": "pt2", 00:12:15.435 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:15.435 "is_configured": true, 00:12:15.435 "data_offset": 2048, 00:12:15.435 "data_size": 63488 00:12:15.435 }, 00:12:15.435 { 00:12:15.435 "name": "pt3", 00:12:15.435 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:15.435 "is_configured": true, 00:12:15.435 "data_offset": 2048, 00:12:15.435 "data_size": 63488 00:12:15.435 } 00:12:15.435 ] 00:12:15.435 } 00:12:15.435 } 00:12:15.435 }' 00:12:15.435 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:15.435 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:15.435 pt2 00:12:15.435 pt3' 00:12:15.435 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:15.435 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:15.435 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:15.769 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:15.769 "name": "pt1", 00:12:15.769 "aliases": [ 00:12:15.769 "00000000-0000-0000-0000-000000000001" 00:12:15.769 ], 00:12:15.769 "product_name": "passthru", 00:12:15.769 "block_size": 512, 00:12:15.769 "num_blocks": 65536, 00:12:15.769 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:15.769 "assigned_rate_limits": { 00:12:15.769 "rw_ios_per_sec": 0, 00:12:15.769 "rw_mbytes_per_sec": 0, 00:12:15.769 "r_mbytes_per_sec": 0, 00:12:15.769 "w_mbytes_per_sec": 0 00:12:15.769 }, 00:12:15.769 "claimed": true, 00:12:15.769 "claim_type": "exclusive_write", 00:12:15.769 "zoned": false, 00:12:15.769 "supported_io_types": { 00:12:15.769 "read": true, 00:12:15.769 "write": true, 00:12:15.769 "unmap": true, 00:12:15.770 "flush": true, 00:12:15.770 "reset": true, 00:12:15.770 "nvme_admin": false, 00:12:15.770 "nvme_io": false, 00:12:15.770 "nvme_io_md": false, 00:12:15.770 "write_zeroes": true, 00:12:15.770 "zcopy": true, 00:12:15.770 "get_zone_info": false, 00:12:15.770 "zone_management": false, 00:12:15.770 "zone_append": false, 00:12:15.770 
"compare": false, 00:12:15.770 "compare_and_write": false, 00:12:15.770 "abort": true, 00:12:15.770 "seek_hole": false, 00:12:15.770 "seek_data": false, 00:12:15.770 "copy": true, 00:12:15.770 "nvme_iov_md": false 00:12:15.770 }, 00:12:15.770 "memory_domains": [ 00:12:15.770 { 00:12:15.770 "dma_device_id": "system", 00:12:15.770 "dma_device_type": 1 00:12:15.770 }, 00:12:15.770 { 00:12:15.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.770 "dma_device_type": 2 00:12:15.770 } 00:12:15.770 ], 00:12:15.770 "driver_specific": { 00:12:15.770 "passthru": { 00:12:15.770 "name": "pt1", 00:12:15.770 "base_bdev_name": "malloc1" 00:12:15.770 } 00:12:15.770 } 00:12:15.770 }' 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:15.770 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.056 "name": "pt2", 00:12:16.056 "aliases": [ 00:12:16.056 "00000000-0000-0000-0000-000000000002" 00:12:16.056 ], 00:12:16.056 "product_name": "passthru", 00:12:16.056 "block_size": 512, 00:12:16.056 "num_blocks": 65536, 00:12:16.056 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:16.056 "assigned_rate_limits": { 00:12:16.056 "rw_ios_per_sec": 0, 00:12:16.056 "rw_mbytes_per_sec": 0, 00:12:16.056 "r_mbytes_per_sec": 0, 00:12:16.056 "w_mbytes_per_sec": 0 00:12:16.056 }, 00:12:16.056 "claimed": true, 00:12:16.056 "claim_type": "exclusive_write", 00:12:16.056 "zoned": false, 00:12:16.056 "supported_io_types": { 00:12:16.056 "read": true, 00:12:16.056 "write": true, 00:12:16.056 "unmap": true, 00:12:16.056 "flush": true, 00:12:16.056 "reset": true, 00:12:16.056 "nvme_admin": false, 00:12:16.056 "nvme_io": false, 00:12:16.056 "nvme_io_md": false, 00:12:16.056 "write_zeroes": true, 00:12:16.056 "zcopy": true, 00:12:16.056 "get_zone_info": false, 00:12:16.056 "zone_management": false, 00:12:16.056 "zone_append": false, 00:12:16.056 "compare": false, 00:12:16.056 "compare_and_write": false, 00:12:16.056 "abort": true, 00:12:16.056 "seek_hole": false, 00:12:16.056 "seek_data": false, 
00:12:16.056 "copy": true, 00:12:16.056 "nvme_iov_md": false 00:12:16.056 }, 00:12:16.056 "memory_domains": [ 00:12:16.056 { 00:12:16.056 "dma_device_id": "system", 00:12:16.056 "dma_device_type": 1 00:12:16.056 }, 00:12:16.056 { 00:12:16.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.056 "dma_device_type": 2 00:12:16.056 } 00:12:16.056 ], 00:12:16.056 "driver_specific": { 00:12:16.056 "passthru": { 00:12:16.056 "name": "pt2", 00:12:16.056 "base_bdev_name": "malloc2" 00:12:16.056 } 00:12:16.056 } 00:12:16.056 }' 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:16.056 17:31:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:16.315 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:16.315 "name": "pt3", 00:12:16.315 "aliases": [ 00:12:16.315 "00000000-0000-0000-0000-000000000003" 00:12:16.315 ], 00:12:16.315 "product_name": "passthru", 00:12:16.315 "block_size": 512, 00:12:16.315 "num_blocks": 65536, 00:12:16.315 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:16.315 "assigned_rate_limits": { 00:12:16.315 "rw_ios_per_sec": 0, 00:12:16.315 "rw_mbytes_per_sec": 0, 00:12:16.315 "r_mbytes_per_sec": 0, 00:12:16.315 "w_mbytes_per_sec": 0 00:12:16.315 }, 00:12:16.315 "claimed": true, 00:12:16.315 "claim_type": "exclusive_write", 00:12:16.315 "zoned": false, 00:12:16.315 "supported_io_types": { 00:12:16.315 "read": true, 00:12:16.315 "write": true, 00:12:16.315 "unmap": true, 00:12:16.315 "flush": true, 00:12:16.315 "reset": true, 00:12:16.315 "nvme_admin": false, 00:12:16.315 "nvme_io": false, 00:12:16.315 "nvme_io_md": false, 00:12:16.315 "write_zeroes": true, 00:12:16.315 "zcopy": true, 00:12:16.315 "get_zone_info": false, 00:12:16.315 "zone_management": false, 00:12:16.315 "zone_append": false, 00:12:16.315 "compare": false, 00:12:16.315 "compare_and_write": false, 00:12:16.315 "abort": true, 00:12:16.315 "seek_hole": false, 00:12:16.315 "seek_data": false, 00:12:16.315 "copy": true, 00:12:16.315 "nvme_iov_md": false 00:12:16.315 }, 00:12:16.315 "memory_domains": [ 00:12:16.315 { 00:12:16.315 "dma_device_id": 
"system", 00:12:16.316 "dma_device_type": 1 00:12:16.316 }, 00:12:16.316 { 00:12:16.316 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.316 "dma_device_type": 2 00:12:16.316 } 00:12:16.316 ], 00:12:16.316 "driver_specific": { 00:12:16.316 "passthru": { 00:12:16.316 "name": "pt3", 00:12:16.316 "base_bdev_name": "malloc3" 00:12:16.316 } 00:12:16.316 } 00:12:16.316 }' 00:12:16.316 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.316 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:16.316 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:16.316 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.316 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:16.316 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:16.316 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.316 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:16.576 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:16.576 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.576 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:16.576 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:16.576 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:16.576 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:16.835 [2024-07-15 17:31:12.430794] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:16.835 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=068d9200-42d0-11ef-96ac-773515fba644 00:12:16.835 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 068d9200-42d0-11ef-96ac-773515fba644 ']' 00:12:16.835 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:17.093 [2024-07-15 17:31:12.722724] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:17.093 [2024-07-15 17:31:12.722750] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:17.093 [2024-07-15 17:31:12.722773] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:17.093 [2024-07-15 17:31:12.722789] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:17.093 [2024-07-15 17:31:12.722793] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d948f635400 name raid_bdev1, state offline 00:12:17.093 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:17.093 17:31:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:17.351 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:17.351 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:12:17.351 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:17.351 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:17.616 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:17.616 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:17.873 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:17.873 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:18.131 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:18.131 17:31:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:18.388 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:18.646 [2024-07-15 17:31:14.350792] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:18.646 [2024-07-15 17:31:14.351433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:18.646 [2024-07-15 17:31:14.351454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:18.646 
[2024-07-15 17:31:14.351469] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:18.646 [2024-07-15 17:31:14.351507] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:18.646 [2024-07-15 17:31:14.351534] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:18.646 [2024-07-15 17:31:14.351542] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:18.646 [2024-07-15 17:31:14.351547] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d948f635180 name raid_bdev1, state configuring 00:12:18.646 request: 00:12:18.646 { 00:12:18.646 "name": "raid_bdev1", 00:12:18.646 "raid_level": "raid1", 00:12:18.646 "base_bdevs": [ 00:12:18.646 "malloc1", 00:12:18.646 "malloc2", 00:12:18.646 "malloc3" 00:12:18.646 ], 00:12:18.646 "superblock": false, 00:12:18.646 "method": "bdev_raid_create", 00:12:18.646 "req_id": 1 00:12:18.646 } 00:12:18.646 Got JSON-RPC error response 00:12:18.646 response: 00:12:18.646 { 00:12:18.646 "code": -17, 00:12:18.646 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:18.646 } 00:12:18.646 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:18.646 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:18.646 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:18.646 17:31:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:18.646 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:18.646 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:18.936 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:18.936 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:18.936 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:19.194 [2024-07-15 17:31:14.826789] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:19.194 [2024-07-15 17:31:14.826855] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.194 [2024-07-15 17:31:14.826884] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f634c80 00:12:19.194 [2024-07-15 17:31:14.826892] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.194 [2024-07-15 17:31:14.827593] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:19.194 [2024-07-15 17:31:14.827623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:19.194 [2024-07-15 17:31:14.827648] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:19.194 [2024-07-15 17:31:14.827660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:19.194 pt1 00:12:19.194 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:19.194 
17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:19.194 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:19.194 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:19.194 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:19.195 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:19.195 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:19.195 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:19.195 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:19.195 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:19.195 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.195 17:31:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:19.461 17:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:19.461 "name": "raid_bdev1", 00:12:19.461 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:19.461 "strip_size_kb": 0, 00:12:19.461 "state": "configuring", 00:12:19.461 "raid_level": "raid1", 00:12:19.461 "superblock": true, 00:12:19.461 "num_base_bdevs": 3, 00:12:19.461 "num_base_bdevs_discovered": 1, 00:12:19.461 "num_base_bdevs_operational": 3, 00:12:19.461 "base_bdevs_list": [ 00:12:19.461 { 00:12:19.461 "name": "pt1", 00:12:19.461 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:19.461 "is_configured": true, 00:12:19.461 "data_offset": 2048, 00:12:19.461 "data_size": 63488 00:12:19.461 }, 00:12:19.461 { 00:12:19.461 "name": null, 00:12:19.461 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:19.461 "is_configured": false, 00:12:19.461 "data_offset": 2048, 00:12:19.461 "data_size": 63488 00:12:19.461 }, 00:12:19.461 { 00:12:19.461 "name": null, 00:12:19.461 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:19.461 "is_configured": false, 00:12:19.461 "data_offset": 2048, 00:12:19.461 "data_size": 63488 00:12:19.461 } 00:12:19.461 ] 00:12:19.461 }' 00:12:19.461 17:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:19.461 17:31:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.718 17:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:19.718 17:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:19.976 [2024-07-15 17:31:15.714845] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:19.976 [2024-07-15 17:31:15.714906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:19.976 [2024-07-15 17:31:15.714918] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f635680 00:12:19.976 [2024-07-15 17:31:15.714926] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:19.976 [2024-07-15 17:31:15.715046] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:12:19.976 [2024-07-15 17:31:15.715058] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:19.976 [2024-07-15 17:31:15.715081] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:19.976 [2024-07-15 17:31:15.715090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:19.976 pt2 00:12:19.976 17:31:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:20.234 [2024-07-15 17:31:15.990835] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:20.234 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:20.492 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:20.492 "name": "raid_bdev1", 00:12:20.492 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:20.492 "strip_size_kb": 0, 00:12:20.492 "state": "configuring", 00:12:20.492 "raid_level": "raid1", 00:12:20.492 "superblock": true, 00:12:20.492 "num_base_bdevs": 3, 00:12:20.492 "num_base_bdevs_discovered": 1, 00:12:20.492 "num_base_bdevs_operational": 3, 00:12:20.492 "base_bdevs_list": [ 00:12:20.492 { 00:12:20.492 "name": "pt1", 00:12:20.492 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:20.492 "is_configured": true, 00:12:20.492 "data_offset": 2048, 00:12:20.492 "data_size": 63488 00:12:20.492 }, 00:12:20.492 { 00:12:20.492 "name": null, 00:12:20.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:20.492 "is_configured": false, 00:12:20.492 "data_offset": 2048, 00:12:20.492 "data_size": 63488 00:12:20.492 }, 00:12:20.492 { 00:12:20.492 "name": null, 00:12:20.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:20.492 "is_configured": false, 00:12:20.492 "data_offset": 2048, 00:12:20.492 "data_size": 63488 00:12:20.492 } 00:12:20.492 ] 00:12:20.492 }' 00:12:20.492 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:20.492 17:31:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.057 17:31:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:21.057 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:21.057 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:21.057 [2024-07-15 17:31:16.886872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:21.057 [2024-07-15 17:31:16.886924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.057 [2024-07-15 17:31:16.886937] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f635680 00:12:21.057 [2024-07-15 17:31:16.886945] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.057 [2024-07-15 17:31:16.887060] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.057 [2024-07-15 17:31:16.887072] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:21.057 [2024-07-15 17:31:16.887096] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:21.057 [2024-07-15 17:31:16.887104] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:21.315 pt2 00:12:21.315 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:21.315 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:21.315 17:31:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:21.315 [2024-07-15 17:31:17.142861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:21.315 [2024-07-15 17:31:17.142906] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.315 [2024-07-15 17:31:17.142917] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f635400 00:12:21.315 [2024-07-15 17:31:17.142925] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.315 [2024-07-15 17:31:17.143037] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.315 [2024-07-15 17:31:17.143050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:21.315 [2024-07-15 17:31:17.143073] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:21.315 [2024-07-15 17:31:17.143081] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:21.315 [2024-07-15 17:31:17.143110] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d948f634780 00:12:21.315 [2024-07-15 17:31:17.143115] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.315 [2024-07-15 17:31:17.143137] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d948f697e20 00:12:21.315 [2024-07-15 17:31:17.143195] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d948f634780 00:12:21.315 [2024-07-15 17:31:17.143200] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d948f634780 00:12:21.315 [2024-07-15 17:31:17.143222] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:12:21.573 pt3 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.573 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.832 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.832 "name": "raid_bdev1", 00:12:21.832 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:21.832 "strip_size_kb": 0, 00:12:21.832 "state": "online", 00:12:21.832 "raid_level": "raid1", 00:12:21.832 "superblock": true, 00:12:21.832 "num_base_bdevs": 3, 00:12:21.832 "num_base_bdevs_discovered": 3, 00:12:21.832 "num_base_bdevs_operational": 3, 00:12:21.832 "base_bdevs_list": [ 00:12:21.832 { 00:12:21.832 "name": "pt1", 00:12:21.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.832 "is_configured": true, 00:12:21.832 "data_offset": 2048, 00:12:21.832 "data_size": 63488 00:12:21.832 }, 00:12:21.832 { 00:12:21.832 "name": "pt2", 00:12:21.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.832 "is_configured": true, 00:12:21.832 "data_offset": 2048, 00:12:21.832 "data_size": 63488 00:12:21.832 }, 00:12:21.832 { 00:12:21.832 "name": "pt3", 00:12:21.832 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.832 "is_configured": true, 00:12:21.832 "data_offset": 2048, 00:12:21.832 "data_size": 63488 00:12:21.832 } 00:12:21.832 ] 00:12:21.832 }' 00:12:21.832 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.832 17:31:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.090 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:22.090 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:22.090 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:22.090 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:22.090 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local 
base_bdev_names 00:12:22.090 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:22.090 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:22.090 17:31:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:22.348 [2024-07-15 17:31:18.014922] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.348 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:22.348 "name": "raid_bdev1", 00:12:22.348 "aliases": [ 00:12:22.348 "068d9200-42d0-11ef-96ac-773515fba644" 00:12:22.348 ], 00:12:22.348 "product_name": "Raid Volume", 00:12:22.348 "block_size": 512, 00:12:22.348 "num_blocks": 63488, 00:12:22.348 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:22.348 "assigned_rate_limits": { 00:12:22.348 "rw_ios_per_sec": 0, 00:12:22.348 "rw_mbytes_per_sec": 0, 00:12:22.348 "r_mbytes_per_sec": 0, 00:12:22.348 "w_mbytes_per_sec": 0 00:12:22.348 }, 00:12:22.348 "claimed": false, 00:12:22.348 "zoned": false, 00:12:22.348 "supported_io_types": { 00:12:22.348 "read": true, 00:12:22.348 "write": true, 00:12:22.348 "unmap": false, 00:12:22.348 "flush": false, 00:12:22.348 "reset": true, 00:12:22.348 "nvme_admin": false, 00:12:22.348 "nvme_io": false, 00:12:22.348 "nvme_io_md": false, 00:12:22.348 "write_zeroes": true, 00:12:22.348 "zcopy": false, 00:12:22.348 "get_zone_info": false, 00:12:22.348 "zone_management": false, 00:12:22.348 "zone_append": false, 00:12:22.348 "compare": false, 00:12:22.348 "compare_and_write": false, 00:12:22.348 "abort": false, 00:12:22.348 "seek_hole": false, 00:12:22.348 "seek_data": false, 00:12:22.348 "copy": false, 00:12:22.348 "nvme_iov_md": false 00:12:22.348 }, 00:12:22.348 "memory_domains": [ 00:12:22.348 { 00:12:22.348 "dma_device_id": "system", 00:12:22.348 "dma_device_type": 1 00:12:22.348 }, 00:12:22.348 { 00:12:22.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.348 "dma_device_type": 2 00:12:22.348 }, 00:12:22.348 { 00:12:22.348 "dma_device_id": "system", 00:12:22.348 "dma_device_type": 1 00:12:22.348 }, 00:12:22.348 { 00:12:22.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.348 "dma_device_type": 2 00:12:22.348 }, 00:12:22.348 { 00:12:22.348 "dma_device_id": "system", 00:12:22.348 "dma_device_type": 1 00:12:22.348 }, 00:12:22.348 { 00:12:22.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.348 "dma_device_type": 2 00:12:22.348 } 00:12:22.348 ], 00:12:22.348 "driver_specific": { 00:12:22.348 "raid": { 00:12:22.348 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:22.348 "strip_size_kb": 0, 00:12:22.348 "state": "online", 00:12:22.348 "raid_level": "raid1", 00:12:22.348 "superblock": true, 00:12:22.348 "num_base_bdevs": 3, 00:12:22.348 "num_base_bdevs_discovered": 3, 00:12:22.348 "num_base_bdevs_operational": 3, 00:12:22.348 "base_bdevs_list": [ 00:12:22.348 { 00:12:22.348 "name": "pt1", 00:12:22.348 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.348 "is_configured": true, 00:12:22.348 "data_offset": 2048, 00:12:22.349 "data_size": 63488 00:12:22.349 }, 00:12:22.349 { 00:12:22.349 "name": "pt2", 00:12:22.349 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.349 "is_configured": true, 00:12:22.349 "data_offset": 2048, 00:12:22.349 "data_size": 63488 00:12:22.349 }, 00:12:22.349 { 00:12:22.349 "name": "pt3", 00:12:22.349 "uuid": "00000000-0000-0000-0000-000000000003", 
00:12:22.349 "is_configured": true, 00:12:22.349 "data_offset": 2048, 00:12:22.349 "data_size": 63488 00:12:22.349 } 00:12:22.349 ] 00:12:22.349 } 00:12:22.349 } 00:12:22.349 }' 00:12:22.349 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.349 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:22.349 pt2 00:12:22.349 pt3' 00:12:22.349 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:22.349 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:22.349 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:22.607 "name": "pt1", 00:12:22.607 "aliases": [ 00:12:22.607 "00000000-0000-0000-0000-000000000001" 00:12:22.607 ], 00:12:22.607 "product_name": "passthru", 00:12:22.607 "block_size": 512, 00:12:22.607 "num_blocks": 65536, 00:12:22.607 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.607 "assigned_rate_limits": { 00:12:22.607 "rw_ios_per_sec": 0, 00:12:22.607 "rw_mbytes_per_sec": 0, 00:12:22.607 "r_mbytes_per_sec": 0, 00:12:22.607 "w_mbytes_per_sec": 0 00:12:22.607 }, 00:12:22.607 "claimed": true, 00:12:22.607 "claim_type": "exclusive_write", 00:12:22.607 "zoned": false, 00:12:22.607 "supported_io_types": { 00:12:22.607 "read": true, 00:12:22.607 "write": true, 00:12:22.607 "unmap": true, 00:12:22.607 "flush": true, 00:12:22.607 "reset": true, 00:12:22.607 "nvme_admin": false, 00:12:22.607 "nvme_io": false, 00:12:22.607 "nvme_io_md": false, 00:12:22.607 "write_zeroes": true, 00:12:22.607 "zcopy": true, 00:12:22.607 "get_zone_info": false, 00:12:22.607 "zone_management": false, 00:12:22.607 "zone_append": false, 00:12:22.607 "compare": false, 00:12:22.607 "compare_and_write": false, 00:12:22.607 "abort": true, 00:12:22.607 "seek_hole": false, 00:12:22.607 "seek_data": false, 00:12:22.607 "copy": true, 00:12:22.607 "nvme_iov_md": false 00:12:22.607 }, 00:12:22.607 "memory_domains": [ 00:12:22.607 { 00:12:22.607 "dma_device_id": "system", 00:12:22.607 "dma_device_type": 1 00:12:22.607 }, 00:12:22.607 { 00:12:22.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.607 "dma_device_type": 2 00:12:22.607 } 00:12:22.607 ], 00:12:22.607 "driver_specific": { 00:12:22.607 "passthru": { 00:12:22.607 "name": "pt1", 00:12:22.607 "base_bdev_name": "malloc1" 00:12:22.607 } 00:12:22.607 } 00:12:22.607 }' 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:22.607 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:22.866 "name": "pt2", 00:12:22.866 "aliases": [ 00:12:22.866 "00000000-0000-0000-0000-000000000002" 00:12:22.866 ], 00:12:22.866 "product_name": "passthru", 00:12:22.866 "block_size": 512, 00:12:22.866 "num_blocks": 65536, 00:12:22.866 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.866 "assigned_rate_limits": { 00:12:22.866 "rw_ios_per_sec": 0, 00:12:22.866 "rw_mbytes_per_sec": 0, 00:12:22.866 "r_mbytes_per_sec": 0, 00:12:22.866 "w_mbytes_per_sec": 0 00:12:22.866 }, 00:12:22.866 "claimed": true, 00:12:22.866 "claim_type": "exclusive_write", 00:12:22.866 "zoned": false, 00:12:22.866 "supported_io_types": { 00:12:22.866 "read": true, 00:12:22.866 "write": true, 00:12:22.866 "unmap": true, 00:12:22.866 "flush": true, 00:12:22.866 "reset": true, 00:12:22.866 "nvme_admin": false, 00:12:22.866 "nvme_io": false, 00:12:22.866 "nvme_io_md": false, 00:12:22.866 "write_zeroes": true, 00:12:22.866 "zcopy": true, 00:12:22.866 "get_zone_info": false, 00:12:22.866 "zone_management": false, 00:12:22.866 "zone_append": false, 00:12:22.866 "compare": false, 00:12:22.866 "compare_and_write": false, 00:12:22.866 "abort": true, 00:12:22.866 "seek_hole": false, 00:12:22.866 "seek_data": false, 00:12:22.866 "copy": true, 00:12:22.866 "nvme_iov_md": false 00:12:22.866 }, 00:12:22.866 "memory_domains": [ 00:12:22.866 { 00:12:22.866 "dma_device_id": "system", 00:12:22.866 "dma_device_type": 1 00:12:22.866 }, 00:12:22.866 { 00:12:22.866 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.866 "dma_device_type": 2 00:12:22.866 } 00:12:22.866 ], 00:12:22.866 "driver_specific": { 00:12:22.866 "passthru": { 00:12:22.866 "name": "pt2", 00:12:22.866 "base_bdev_name": "malloc2" 00:12:22.866 } 00:12:22.866 } 00:12:22.866 }' 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.866 
17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:22.866 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:23.125 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:23.125 "name": "pt3", 00:12:23.125 "aliases": [ 00:12:23.125 "00000000-0000-0000-0000-000000000003" 00:12:23.125 ], 00:12:23.125 "product_name": "passthru", 00:12:23.125 "block_size": 512, 00:12:23.125 "num_blocks": 65536, 00:12:23.125 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.125 "assigned_rate_limits": { 00:12:23.125 "rw_ios_per_sec": 0, 00:12:23.125 "rw_mbytes_per_sec": 0, 00:12:23.125 "r_mbytes_per_sec": 0, 00:12:23.125 "w_mbytes_per_sec": 0 00:12:23.125 }, 00:12:23.125 "claimed": true, 00:12:23.125 "claim_type": "exclusive_write", 00:12:23.125 "zoned": false, 00:12:23.125 "supported_io_types": { 00:12:23.125 "read": true, 00:12:23.125 "write": true, 00:12:23.125 "unmap": true, 00:12:23.125 "flush": true, 00:12:23.125 "reset": true, 00:12:23.125 "nvme_admin": false, 00:12:23.125 "nvme_io": false, 00:12:23.125 "nvme_io_md": false, 00:12:23.125 "write_zeroes": true, 00:12:23.125 "zcopy": true, 00:12:23.125 "get_zone_info": false, 00:12:23.125 "zone_management": false, 00:12:23.125 "zone_append": false, 00:12:23.125 "compare": false, 00:12:23.125 "compare_and_write": false, 00:12:23.125 "abort": true, 00:12:23.125 "seek_hole": false, 00:12:23.125 "seek_data": false, 00:12:23.125 "copy": true, 00:12:23.125 "nvme_iov_md": false 00:12:23.125 }, 00:12:23.125 "memory_domains": [ 00:12:23.125 { 00:12:23.125 "dma_device_id": "system", 00:12:23.125 "dma_device_type": 1 00:12:23.125 }, 00:12:23.125 { 00:12:23.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.125 "dma_device_type": 2 00:12:23.125 } 00:12:23.125 ], 00:12:23.125 "driver_specific": { 00:12:23.125 "passthru": { 00:12:23.125 "name": "pt3", 00:12:23.125 "base_bdev_name": "malloc3" 00:12:23.125 } 00:12:23.125 } 00:12:23.125 }' 00:12:23.125 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:23.125 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:23.125 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:23.125 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:23.125 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:23.382 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:23.382 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:23.382 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:23.383 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:23.383 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:23.383 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:23.383 17:31:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:23.383 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:23.383 17:31:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:23.640 [2024-07-15 17:31:19.254965] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.640 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 068d9200-42d0-11ef-96ac-773515fba644 '!=' 068d9200-42d0-11ef-96ac-773515fba644 ']' 00:12:23.640 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:12:23.640 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:23.640 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:23.641 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:23.898 [2024-07-15 17:31:19.542931] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.898 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.156 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:24.156 "name": "raid_bdev1", 00:12:24.156 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:24.156 "strip_size_kb": 0, 00:12:24.156 "state": "online", 00:12:24.156 "raid_level": "raid1", 00:12:24.156 "superblock": true, 00:12:24.156 "num_base_bdevs": 3, 00:12:24.156 "num_base_bdevs_discovered": 2, 00:12:24.156 "num_base_bdevs_operational": 2, 00:12:24.156 "base_bdevs_list": [ 00:12:24.156 { 00:12:24.156 "name": null, 00:12:24.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.156 "is_configured": false, 00:12:24.156 "data_offset": 2048, 00:12:24.156 "data_size": 63488 00:12:24.156 }, 00:12:24.156 { 00:12:24.156 "name": "pt2", 00:12:24.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.156 "is_configured": true, 00:12:24.156 "data_offset": 2048, 00:12:24.156 "data_size": 63488 00:12:24.156 }, 00:12:24.156 { 
00:12:24.156 "name": "pt3", 00:12:24.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.156 "is_configured": true, 00:12:24.156 "data_offset": 2048, 00:12:24.156 "data_size": 63488 00:12:24.156 } 00:12:24.156 ] 00:12:24.156 }' 00:12:24.156 17:31:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:24.156 17:31:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.414 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:24.671 [2024-07-15 17:31:20.390942] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:24.671 [2024-07-15 17:31:20.390963] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:24.671 [2024-07-15 17:31:20.390985] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:24.671 [2024-07-15 17:31:20.391000] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:24.671 [2024-07-15 17:31:20.391004] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d948f634780 name raid_bdev1, state offline 00:12:24.671 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:24.671 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:12:24.928 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:12:24.928 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:12:24.928 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:12:24.928 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:24.928 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:25.186 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:25.186 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:25.186 17:31:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:25.480 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:25.480 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:25.480 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:12:25.480 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:25.480 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:25.764 [2024-07-15 17:31:21.378991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:25.764 [2024-07-15 17:31:21.379042] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.764 [2024-07-15 17:31:21.379054] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f635400 00:12:25.764 [2024-07-15 
17:31:21.379062] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.764 [2024-07-15 17:31:21.379718] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.764 [2024-07-15 17:31:21.379743] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:25.764 [2024-07-15 17:31:21.379769] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:25.764 [2024-07-15 17:31:21.379781] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:25.764 pt2 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.764 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.021 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.021 "name": "raid_bdev1", 00:12:26.021 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:26.021 "strip_size_kb": 0, 00:12:26.021 "state": "configuring", 00:12:26.021 "raid_level": "raid1", 00:12:26.021 "superblock": true, 00:12:26.021 "num_base_bdevs": 3, 00:12:26.021 "num_base_bdevs_discovered": 1, 00:12:26.021 "num_base_bdevs_operational": 2, 00:12:26.021 "base_bdevs_list": [ 00:12:26.021 { 00:12:26.021 "name": null, 00:12:26.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.021 "is_configured": false, 00:12:26.021 "data_offset": 2048, 00:12:26.021 "data_size": 63488 00:12:26.021 }, 00:12:26.021 { 00:12:26.021 "name": "pt2", 00:12:26.021 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.021 "is_configured": true, 00:12:26.021 "data_offset": 2048, 00:12:26.021 "data_size": 63488 00:12:26.021 }, 00:12:26.021 { 00:12:26.021 "name": null, 00:12:26.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.021 "is_configured": false, 00:12:26.021 "data_offset": 2048, 00:12:26.021 "data_size": 63488 00:12:26.021 } 00:12:26.021 ] 00:12:26.021 }' 00:12:26.021 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.021 17:31:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.279 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:12:26.279 17:31:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:26.279 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:12:26.279 17:31:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:26.537 [2024-07-15 17:31:22.171027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:26.537 [2024-07-15 17:31:22.171093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.537 [2024-07-15 17:31:22.171122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f634780 00:12:26.537 [2024-07-15 17:31:22.171129] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.537 [2024-07-15 17:31:22.171261] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:26.537 [2024-07-15 17:31:22.171279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:26.537 [2024-07-15 17:31:22.171304] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:26.537 [2024-07-15 17:31:22.171313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:26.537 [2024-07-15 17:31:22.171340] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d948f635180 00:12:26.537 [2024-07-15 17:31:22.171344] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:26.537 [2024-07-15 17:31:22.171364] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d948f697e20 00:12:26.537 [2024-07-15 17:31:22.171413] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d948f635180 00:12:26.537 [2024-07-15 17:31:22.171418] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d948f635180 00:12:26.537 [2024-07-15 17:31:22.171439] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:26.537 pt3 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.537 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:12:26.796 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.796 "name": "raid_bdev1", 00:12:26.796 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:26.796 "strip_size_kb": 0, 00:12:26.796 "state": "online", 00:12:26.796 "raid_level": "raid1", 00:12:26.796 "superblock": true, 00:12:26.796 "num_base_bdevs": 3, 00:12:26.796 "num_base_bdevs_discovered": 2, 00:12:26.796 "num_base_bdevs_operational": 2, 00:12:26.796 "base_bdevs_list": [ 00:12:26.796 { 00:12:26.796 "name": null, 00:12:26.796 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.796 "is_configured": false, 00:12:26.796 "data_offset": 2048, 00:12:26.796 "data_size": 63488 00:12:26.796 }, 00:12:26.796 { 00:12:26.796 "name": "pt2", 00:12:26.796 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.796 "is_configured": true, 00:12:26.796 "data_offset": 2048, 00:12:26.796 "data_size": 63488 00:12:26.796 }, 00:12:26.796 { 00:12:26.796 "name": "pt3", 00:12:26.797 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.797 "is_configured": true, 00:12:26.797 "data_offset": 2048, 00:12:26.797 "data_size": 63488 00:12:26.797 } 00:12:26.797 ] 00:12:26.797 }' 00:12:26.797 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.797 17:31:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.055 17:31:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:27.313 [2024-07-15 17:31:23.003068] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:27.313 [2024-07-15 17:31:23.003092] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.313 [2024-07-15 17:31:23.003130] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.313 [2024-07-15 17:31:23.003144] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.313 [2024-07-15 17:31:23.003148] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d948f635180 name raid_bdev1, state offline 00:12:27.313 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.313 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:12:27.575 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:12:27.575 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:12:27.575 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:12:27.575 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:12:27.575 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:27.833 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:28.101 [2024-07-15 17:31:23.751133] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:28.102 [2024-07-15 17:31:23.751204] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.102 
[2024-07-15 17:31:23.751232] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f634780 00:12:28.102 [2024-07-15 17:31:23.751240] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.102 [2024-07-15 17:31:23.751967] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.102 [2024-07-15 17:31:23.751994] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:28.102 [2024-07-15 17:31:23.752020] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:28.102 [2024-07-15 17:31:23.752032] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:28.102 [2024-07-15 17:31:23.752062] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:28.102 [2024-07-15 17:31:23.752067] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:28.102 [2024-07-15 17:31:23.752072] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d948f635180 name raid_bdev1, state configuring 00:12:28.102 [2024-07-15 17:31:23.752080] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:28.102 pt1 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:28.102 17:31:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.361 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.361 "name": "raid_bdev1", 00:12:28.361 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:28.361 "strip_size_kb": 0, 00:12:28.361 "state": "configuring", 00:12:28.361 "raid_level": "raid1", 00:12:28.361 "superblock": true, 00:12:28.361 "num_base_bdevs": 3, 00:12:28.361 "num_base_bdevs_discovered": 1, 00:12:28.361 "num_base_bdevs_operational": 2, 00:12:28.361 "base_bdevs_list": [ 00:12:28.361 { 00:12:28.361 "name": null, 00:12:28.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.361 "is_configured": false, 00:12:28.361 "data_offset": 2048, 00:12:28.361 "data_size": 63488 00:12:28.361 }, 
00:12:28.361 { 00:12:28.361 "name": "pt2", 00:12:28.361 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.361 "is_configured": true, 00:12:28.361 "data_offset": 2048, 00:12:28.361 "data_size": 63488 00:12:28.361 }, 00:12:28.361 { 00:12:28.361 "name": null, 00:12:28.361 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.361 "is_configured": false, 00:12:28.361 "data_offset": 2048, 00:12:28.361 "data_size": 63488 00:12:28.361 } 00:12:28.361 ] 00:12:28.361 }' 00:12:28.361 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.361 17:31:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.620 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:12:28.620 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:28.880 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:12:28.880 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:29.180 [2024-07-15 17:31:24.859174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:29.180 [2024-07-15 17:31:24.859229] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:29.180 [2024-07-15 17:31:24.859241] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d948f634c80 00:12:29.180 [2024-07-15 17:31:24.859249] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:29.180 [2024-07-15 17:31:24.859367] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:29.180 [2024-07-15 17:31:24.859378] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:29.180 [2024-07-15 17:31:24.859402] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:29.180 [2024-07-15 17:31:24.859410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:29.180 [2024-07-15 17:31:24.859438] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1d948f635180 00:12:29.180 [2024-07-15 17:31:24.859442] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:29.180 [2024-07-15 17:31:24.859462] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1d948f697e20 00:12:29.180 [2024-07-15 17:31:24.859510] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1d948f635180 00:12:29.180 [2024-07-15 17:31:24.859515] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1d948f635180 00:12:29.180 [2024-07-15 17:31:24.859536] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:29.180 pt3 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:29.180 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:29.181 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.181 17:31:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.438 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:29.438 "name": "raid_bdev1", 00:12:29.438 "uuid": "068d9200-42d0-11ef-96ac-773515fba644", 00:12:29.438 "strip_size_kb": 0, 00:12:29.438 "state": "online", 00:12:29.438 "raid_level": "raid1", 00:12:29.438 "superblock": true, 00:12:29.438 "num_base_bdevs": 3, 00:12:29.438 "num_base_bdevs_discovered": 2, 00:12:29.438 "num_base_bdevs_operational": 2, 00:12:29.438 "base_bdevs_list": [ 00:12:29.438 { 00:12:29.438 "name": null, 00:12:29.438 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.438 "is_configured": false, 00:12:29.438 "data_offset": 2048, 00:12:29.438 "data_size": 63488 00:12:29.438 }, 00:12:29.438 { 00:12:29.438 "name": "pt2", 00:12:29.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.438 "is_configured": true, 00:12:29.438 "data_offset": 2048, 00:12:29.438 "data_size": 63488 00:12:29.438 }, 00:12:29.438 { 00:12:29.438 "name": "pt3", 00:12:29.438 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.438 "is_configured": true, 00:12:29.438 "data_offset": 2048, 00:12:29.438 "data_size": 63488 00:12:29.438 } 00:12:29.438 ] 00:12:29.438 }' 00:12:29.438 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:29.438 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.695 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:29.695 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:29.952 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:12:29.952 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:29.952 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:12:30.210 [2024-07-15 17:31:25.955250] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 068d9200-42d0-11ef-96ac-773515fba644 '!=' 068d9200-42d0-11ef-96ac-773515fba644 ']' 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 57549 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 57549 ']' 00:12:30.210 17:31:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 57549 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 57549 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:30.210 killing process with pid 57549 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57549' 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 57549 00:12:30.210 [2024-07-15 17:31:25.984839] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.210 [2024-07-15 17:31:25.984862] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.210 [2024-07-15 17:31:25.984877] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.210 [2024-07-15 17:31:25.984881] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1d948f635180 name raid_bdev1, state offline 00:12:30.210 17:31:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 57549 00:12:30.210 [2024-07-15 17:31:26.003340] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:30.468 17:31:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:30.468 00:12:30.468 real 0m18.834s 00:12:30.468 user 0m34.339s 00:12:30.468 sys 0m2.547s 00:12:30.468 ************************************ 00:12:30.468 END TEST raid_superblock_test 00:12:30.468 ************************************ 00:12:30.468 17:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:30.468 17:31:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.469 17:31:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:30.469 17:31:26 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:30.469 17:31:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:30.469 17:31:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:30.469 17:31:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:30.469 ************************************ 00:12:30.469 START TEST raid_read_error_test 00:12:30.469 ************************************ 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:30.469 17:31:26 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.39Au9RhAHn 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58103 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58103 /var/tmp/spdk-raid.sock 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 58103 ']' 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.469 17:31:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.469 [2024-07-15 17:31:26.241914] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:12:30.469 [2024-07-15 17:31:26.242075] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:31.035 EAL: TSC is not safe to use in SMP mode 00:12:31.035 EAL: TSC is not invariant 00:12:31.035 [2024-07-15 17:31:26.783706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.294 [2024-07-15 17:31:26.871574] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:31.294 [2024-07-15 17:31:26.873802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.294 [2024-07-15 17:31:26.874600] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.294 [2024-07-15 17:31:26.874612] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:31.641 17:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.641 17:31:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:31.641 17:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:31.641 17:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:31.927 BaseBdev1_malloc 00:12:31.927 17:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:32.185 true 00:12:32.185 17:31:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:32.444 [2024-07-15 17:31:28.122481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:32.444 [2024-07-15 17:31:28.122572] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.444 [2024-07-15 17:31:28.122626] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x315c71034780 00:12:32.444 [2024-07-15 17:31:28.122635] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.444 [2024-07-15 17:31:28.123342] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.444 [2024-07-15 17:31:28.123368] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:32.444 BaseBdev1 00:12:32.444 17:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:32.444 17:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:32.702 BaseBdev2_malloc 00:12:32.702 17:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:32.961 true 00:12:32.961 17:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:33.218 [2024-07-15 17:31:28.834502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:33.218 [2024-07-15 17:31:28.834550] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.218 [2024-07-15 17:31:28.834579] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x315c71034c80 00:12:33.218 [2024-07-15 17:31:28.834588] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.218 [2024-07-15 17:31:28.835252] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.218 [2024-07-15 17:31:28.835279] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.218 BaseBdev2 00:12:33.218 17:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:33.218 17:31:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:33.475 BaseBdev3_malloc 00:12:33.475 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:33.733 true 00:12:33.733 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:33.991 [2024-07-15 17:31:29.566539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:33.991 [2024-07-15 17:31:29.566594] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.991 [2024-07-15 17:31:29.566620] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x315c71035180 00:12:33.991 [2024-07-15 17:31:29.566629] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.991 [2024-07-15 17:31:29.567365] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.991 [2024-07-15 17:31:29.567390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:33.991 BaseBdev3 00:12:33.991 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:34.250 [2024-07-15 17:31:29.830596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:34.250 [2024-07-15 17:31:29.831254] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:34.250 [2024-07-15 17:31:29.831279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:34.251 [2024-07-15 17:31:29.831341] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x315c71035400 00:12:34.251 [2024-07-15 17:31:29.831347] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:34.251 [2024-07-15 17:31:29.831381] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x315c710a0e20 00:12:34.251 [2024-07-15 17:31:29.831461] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x315c71035400 00:12:34.251 [2024-07-15 17:31:29.831466] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x315c71035400 00:12:34.251 [2024-07-15 17:31:29.831494] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.251 17:31:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.510 17:31:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.510 "name": "raid_bdev1", 00:12:34.510 "uuid": "123bab4c-42d0-11ef-96ac-773515fba644", 00:12:34.510 "strip_size_kb": 0, 00:12:34.510 "state": "online", 00:12:34.510 "raid_level": "raid1", 00:12:34.510 "superblock": true, 00:12:34.510 "num_base_bdevs": 3, 00:12:34.510 "num_base_bdevs_discovered": 3, 00:12:34.510 "num_base_bdevs_operational": 3, 00:12:34.510 "base_bdevs_list": [ 00:12:34.510 { 00:12:34.510 "name": "BaseBdev1", 00:12:34.510 "uuid": "4c2b8235-af0e-d253-9694-6c9f3fd992ce", 00:12:34.510 "is_configured": true, 00:12:34.510 "data_offset": 2048, 00:12:34.510 "data_size": 63488 00:12:34.510 }, 00:12:34.510 { 00:12:34.510 "name": "BaseBdev2", 00:12:34.510 "uuid": "bc275fab-bb19-d855-b13b-831487e0d1a2", 00:12:34.510 "is_configured": true, 00:12:34.510 "data_offset": 2048, 00:12:34.510 "data_size": 63488 00:12:34.510 }, 00:12:34.510 { 00:12:34.510 "name": "BaseBdev3", 00:12:34.510 "uuid": "9e6c8c68-8ac4-7850-8834-3fb9d4c93991", 00:12:34.510 "is_configured": true, 00:12:34.510 "data_offset": 2048, 00:12:34.510 "data_size": 63488 00:12:34.510 } 00:12:34.510 ] 00:12:34.510 }' 00:12:34.510 17:31:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.510 17:31:30 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.769 17:31:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:34.769 17:31:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:34.769 [2024-07-15 17:31:30.562869] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x315c710a0ec0 00:12:35.705 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:36.271 17:31:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:36.271 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:36.271 "name": "raid_bdev1", 00:12:36.271 "uuid": "123bab4c-42d0-11ef-96ac-773515fba644", 00:12:36.271 "strip_size_kb": 0, 00:12:36.271 "state": "online", 00:12:36.271 "raid_level": "raid1", 00:12:36.271 "superblock": true, 00:12:36.271 "num_base_bdevs": 3, 00:12:36.271 "num_base_bdevs_discovered": 3, 00:12:36.272 "num_base_bdevs_operational": 3, 00:12:36.272 "base_bdevs_list": [ 00:12:36.272 { 00:12:36.272 "name": "BaseBdev1", 00:12:36.272 "uuid": "4c2b8235-af0e-d253-9694-6c9f3fd992ce", 00:12:36.272 "is_configured": true, 00:12:36.272 "data_offset": 2048, 00:12:36.272 "data_size": 63488 00:12:36.272 }, 00:12:36.272 { 00:12:36.272 "name": "BaseBdev2", 00:12:36.272 "uuid": "bc275fab-bb19-d855-b13b-831487e0d1a2", 00:12:36.272 "is_configured": true, 00:12:36.272 "data_offset": 2048, 00:12:36.272 "data_size": 63488 00:12:36.272 }, 00:12:36.272 { 00:12:36.272 "name": "BaseBdev3", 00:12:36.272 "uuid": "9e6c8c68-8ac4-7850-8834-3fb9d4c93991", 00:12:36.272 "is_configured": true, 00:12:36.272 "data_offset": 2048, 00:12:36.272 "data_size": 63488 00:12:36.272 } 00:12:36.272 ] 00:12:36.272 }' 00:12:36.272 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:36.272 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.836 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:37.094 [2024-07-15 17:31:32.706105] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:37.094 [2024-07-15 17:31:32.706132] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:37.094 [2024-07-15 17:31:32.706502] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:37.094 [2024-07-15 17:31:32.706528] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.094 [2024-07-15 17:31:32.706545] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:37.094 [2024-07-15 17:31:32.706550] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x315c71035400 name raid_bdev1, state offline 00:12:37.094 0 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58103 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 58103 ']' 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 58103 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58103 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:37.094 killing process with pid 58103 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58103' 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 58103 00:12:37.094 [2024-07-15 17:31:32.733839] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:37.094 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 58103 00:12:37.094 [2024-07-15 17:31:32.751771] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.39Au9RhAHn 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:37.352 00:12:37.352 real 0m6.712s 00:12:37.352 user 0m10.579s 00:12:37.352 sys 0m1.112s 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.352 17:31:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 ************************************ 00:12:37.352 END TEST raid_read_error_test 00:12:37.352 ************************************ 00:12:37.352 17:31:32 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:37.352 17:31:32 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:37.352 17:31:32 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:37.352 17:31:32 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.352 17:31:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 ************************************ 00:12:37.352 START TEST raid_write_error_test 00:12:37.352 ************************************ 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.4EIKdsKndN 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58234 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58234 /var/tmp/spdk-raid.sock 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 58234 ']' 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:37.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:37.352 17:31:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.352 [2024-07-15 17:31:32.997800] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:12:37.352 [2024-07-15 17:31:32.998038] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:37.918 EAL: TSC is not safe to use in SMP mode 00:12:37.918 EAL: TSC is not invariant 00:12:37.918 [2024-07-15 17:31:33.536877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.918 [2024-07-15 17:31:33.623199] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:37.918 [2024-07-15 17:31:33.625588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.918 [2024-07-15 17:31:33.626419] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.918 [2024-07-15 17:31:33.626432] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.483 17:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:38.483 17:31:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:38.483 17:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:38.483 17:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:38.483 BaseBdev1_malloc 00:12:38.483 17:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:38.741 true 00:12:38.741 17:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:39.025 [2024-07-15 17:31:34.839057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:39.025 [2024-07-15 17:31:34.839140] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.025 [2024-07-15 17:31:34.839167] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c7533434780 00:12:39.025 [2024-07-15 17:31:34.839175] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.025 [2024-07-15 17:31:34.839852] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.025 [2024-07-15 17:31:34.839881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:12:39.285 BaseBdev1 00:12:39.285 17:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:39.285 17:31:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:39.285 BaseBdev2_malloc 00:12:39.285 17:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:39.853 true 00:12:39.853 17:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:39.853 [2024-07-15 17:31:35.643102] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:39.853 [2024-07-15 17:31:35.643167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.853 [2024-07-15 17:31:35.643207] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c7533434c80 00:12:39.853 [2024-07-15 17:31:35.643216] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.853 [2024-07-15 17:31:35.643943] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.853 [2024-07-15 17:31:35.644001] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:39.853 BaseBdev2 00:12:39.853 17:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:39.853 17:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:40.111 BaseBdev3_malloc 00:12:40.111 17:31:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:40.369 true 00:12:40.369 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:40.627 [2024-07-15 17:31:36.431141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:40.627 [2024-07-15 17:31:36.431205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:40.627 [2024-07-15 17:31:36.431257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c7533435180 00:12:40.627 [2024-07-15 17:31:36.431265] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:40.627 [2024-07-15 17:31:36.431961] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:40.627 [2024-07-15 17:31:36.431989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:40.627 BaseBdev3 00:12:40.627 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:40.885 [2024-07-15 17:31:36.667187] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.885 [2024-07-15 17:31:36.667776] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:40.885 [2024-07-15 17:31:36.667804] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.885 [2024-07-15 17:31:36.667864] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c7533435400 00:12:40.885 [2024-07-15 17:31:36.667871] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:40.885 [2024-07-15 17:31:36.667905] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c75334a0e20 00:12:40.885 [2024-07-15 17:31:36.667984] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c7533435400 00:12:40.885 [2024-07-15 17:31:36.667989] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c7533435400 00:12:40.885 [2024-07-15 17:31:36.668017] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.885 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.143 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.143 "name": "raid_bdev1", 00:12:41.143 "uuid": "164ed9a6-42d0-11ef-96ac-773515fba644", 00:12:41.143 "strip_size_kb": 0, 00:12:41.143 "state": "online", 00:12:41.143 "raid_level": "raid1", 00:12:41.143 "superblock": true, 00:12:41.143 "num_base_bdevs": 3, 00:12:41.143 "num_base_bdevs_discovered": 3, 00:12:41.143 "num_base_bdevs_operational": 3, 00:12:41.143 "base_bdevs_list": [ 00:12:41.143 { 00:12:41.144 "name": "BaseBdev1", 00:12:41.144 "uuid": "7981c7b4-b6b5-f15a-97d5-0add26029a12", 00:12:41.144 "is_configured": true, 00:12:41.144 "data_offset": 2048, 00:12:41.144 "data_size": 63488 00:12:41.144 }, 00:12:41.144 { 00:12:41.144 "name": "BaseBdev2", 00:12:41.144 "uuid": "b777215d-b327-3754-a4b2-dc11545ccfbc", 00:12:41.144 "is_configured": true, 00:12:41.144 "data_offset": 2048, 00:12:41.144 "data_size": 63488 00:12:41.144 }, 00:12:41.144 { 00:12:41.144 "name": "BaseBdev3", 00:12:41.144 "uuid": "6f7064c6-12d0-d65f-b828-314976f81273", 00:12:41.144 "is_configured": true, 00:12:41.144 "data_offset": 2048, 00:12:41.144 
"data_size": 63488 00:12:41.144 } 00:12:41.144 ] 00:12:41.144 }' 00:12:41.144 17:31:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.144 17:31:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.710 17:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:41.710 17:31:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:41.710 [2024-07-15 17:31:37.455403] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c75334a0ec0 00:12:42.645 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:42.904 [2024-07-15 17:31:38.651639] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:42.904 [2024-07-15 17:31:38.651693] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.904 [2024-07-15 17:31:38.651823] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1c75334a0ec0 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:42.904 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.162 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:43.162 "name": "raid_bdev1", 00:12:43.162 "uuid": "164ed9a6-42d0-11ef-96ac-773515fba644", 00:12:43.162 "strip_size_kb": 0, 00:12:43.162 "state": "online", 00:12:43.163 "raid_level": "raid1", 00:12:43.163 "superblock": true, 00:12:43.163 "num_base_bdevs": 3, 00:12:43.163 
"num_base_bdevs_discovered": 2, 00:12:43.163 "num_base_bdevs_operational": 2, 00:12:43.163 "base_bdevs_list": [ 00:12:43.163 { 00:12:43.163 "name": null, 00:12:43.163 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.163 "is_configured": false, 00:12:43.163 "data_offset": 2048, 00:12:43.163 "data_size": 63488 00:12:43.163 }, 00:12:43.163 { 00:12:43.163 "name": "BaseBdev2", 00:12:43.163 "uuid": "b777215d-b327-3754-a4b2-dc11545ccfbc", 00:12:43.163 "is_configured": true, 00:12:43.163 "data_offset": 2048, 00:12:43.163 "data_size": 63488 00:12:43.163 }, 00:12:43.163 { 00:12:43.163 "name": "BaseBdev3", 00:12:43.163 "uuid": "6f7064c6-12d0-d65f-b828-314976f81273", 00:12:43.163 "is_configured": true, 00:12:43.163 "data_offset": 2048, 00:12:43.163 "data_size": 63488 00:12:43.163 } 00:12:43.163 ] 00:12:43.163 }' 00:12:43.163 17:31:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:43.163 17:31:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.729 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:43.729 [2024-07-15 17:31:39.543367] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:43.729 [2024-07-15 17:31:39.543395] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:43.729 [2024-07-15 17:31:39.543732] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:43.729 [2024-07-15 17:31:39.543743] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.729 [2024-07-15 17:31:39.543757] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:43.729 [2024-07-15 17:31:39.543761] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c7533435400 name raid_bdev1, state offline 00:12:43.729 0 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58234 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 58234 ']' 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 58234 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58234 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:43.988 killing process with pid 58234 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58234' 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 58234 00:12:43.988 [2024-07-15 17:31:39.571003] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 58234 00:12:43.988 [2024-07-15 17:31:39.588735] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.4EIKdsKndN 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:43.988 00:12:43.988 real 0m6.805s 00:12:43.988 user 0m10.699s 00:12:43.988 sys 0m1.168s 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:43.988 17:31:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.988 ************************************ 00:12:43.988 END TEST raid_write_error_test 00:12:43.988 ************************************ 00:12:44.246 17:31:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:44.246 17:31:39 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:12:44.246 17:31:39 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:44.246 17:31:39 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:44.246 17:31:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:44.246 17:31:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.246 17:31:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:44.246 ************************************ 00:12:44.246 START TEST raid_state_function_test 00:12:44.246 ************************************ 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- 
# echo BaseBdev3 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=58367 00:12:44.246 Process raid pid: 58367 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58367' 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 58367 /var/tmp/spdk-raid.sock 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 58367 ']' 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.246 17:31:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.246 [2024-07-15 17:31:39.846425] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:12:44.246 [2024-07-15 17:31:39.846657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:44.812 EAL: TSC is not safe to use in SMP mode 00:12:44.812 EAL: TSC is not invariant 00:12:44.812 [2024-07-15 17:31:40.418217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.812 [2024-07-15 17:31:40.519794] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:44.812 [2024-07-15 17:31:40.522283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:44.812 [2024-07-15 17:31:40.523175] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:44.812 [2024-07-15 17:31:40.523194] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:45.378 17:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.378 17:31:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:45.378 17:31:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:45.378 [2024-07-15 17:31:41.176893] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:45.378 [2024-07-15 17:31:41.176948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:45.378 [2024-07-15 17:31:41.176954] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:45.378 [2024-07-15 17:31:41.176963] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:45.378 [2024-07-15 17:31:41.176967] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:45.378 [2024-07-15 17:31:41.176974] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:45.378 [2024-07-15 17:31:41.176978] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:45.378 [2024-07-15 17:31:41.176985] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:45.378 17:31:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.378 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.943 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:45.943 "name": "Existed_Raid", 00:12:45.943 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.943 "strip_size_kb": 64, 00:12:45.943 "state": "configuring", 00:12:45.944 "raid_level": "raid0", 00:12:45.944 "superblock": false, 00:12:45.944 "num_base_bdevs": 4, 00:12:45.944 "num_base_bdevs_discovered": 0, 00:12:45.944 "num_base_bdevs_operational": 4, 00:12:45.944 "base_bdevs_list": [ 00:12:45.944 { 00:12:45.944 "name": "BaseBdev1", 00:12:45.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.944 "is_configured": false, 00:12:45.944 "data_offset": 0, 00:12:45.944 "data_size": 0 00:12:45.944 }, 00:12:45.944 { 00:12:45.944 "name": "BaseBdev2", 00:12:45.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.944 "is_configured": false, 00:12:45.944 "data_offset": 0, 00:12:45.944 "data_size": 0 00:12:45.944 }, 00:12:45.944 { 00:12:45.944 "name": "BaseBdev3", 00:12:45.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.944 "is_configured": false, 00:12:45.944 "data_offset": 0, 00:12:45.944 "data_size": 0 00:12:45.944 }, 00:12:45.944 { 00:12:45.944 "name": "BaseBdev4", 00:12:45.944 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.944 "is_configured": false, 00:12:45.944 "data_offset": 0, 00:12:45.944 "data_size": 0 00:12:45.944 } 00:12:45.944 ] 00:12:45.944 }' 00:12:45.944 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:45.944 17:31:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.944 17:31:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:46.201 [2024-07-15 17:31:41.992891] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:46.201 [2024-07-15 17:31:41.992924] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x301c4834500 name Existed_Raid, state configuring 00:12:46.201 17:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:46.459 [2024-07-15 17:31:42.252903] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:46.459 [2024-07-15 17:31:42.252955] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:46.459 [2024-07-15 17:31:42.252961] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:46.459 [2024-07-15 17:31:42.252970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:46.459 [2024-07-15 17:31:42.252974] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:46.459 [2024-07-15 17:31:42.252982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:46.459 [2024-07-15 17:31:42.252985] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:46.459 [2024-07-15 17:31:42.252992] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:46.459 17:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:46.717 [2024-07-15 17:31:42.493977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.717 BaseBdev1 00:12:46.717 17:31:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:46.717 17:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:46.717 17:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:46.717 17:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:46.717 17:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:46.717 17:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:46.717 17:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:46.975 17:31:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:47.233 [ 00:12:47.233 { 00:12:47.233 "name": "BaseBdev1", 00:12:47.233 "aliases": [ 00:12:47.233 "19c7ca0c-42d0-11ef-96ac-773515fba644" 00:12:47.233 ], 00:12:47.233 "product_name": "Malloc disk", 00:12:47.233 "block_size": 512, 00:12:47.233 "num_blocks": 65536, 00:12:47.233 "uuid": "19c7ca0c-42d0-11ef-96ac-773515fba644", 00:12:47.233 "assigned_rate_limits": { 00:12:47.233 "rw_ios_per_sec": 0, 00:12:47.233 "rw_mbytes_per_sec": 0, 00:12:47.233 "r_mbytes_per_sec": 0, 00:12:47.233 "w_mbytes_per_sec": 0 00:12:47.233 }, 00:12:47.233 "claimed": true, 00:12:47.233 "claim_type": "exclusive_write", 00:12:47.233 "zoned": false, 00:12:47.233 "supported_io_types": { 00:12:47.233 "read": true, 00:12:47.233 "write": true, 00:12:47.233 "unmap": true, 00:12:47.233 "flush": true, 00:12:47.233 "reset": true, 00:12:47.233 "nvme_admin": false, 00:12:47.233 "nvme_io": false, 00:12:47.233 "nvme_io_md": false, 00:12:47.233 "write_zeroes": true, 00:12:47.233 "zcopy": true, 00:12:47.233 "get_zone_info": false, 00:12:47.233 "zone_management": false, 00:12:47.233 "zone_append": false, 00:12:47.233 "compare": false, 00:12:47.233 "compare_and_write": false, 00:12:47.233 "abort": true, 00:12:47.233 "seek_hole": false, 00:12:47.233 "seek_data": false, 00:12:47.233 "copy": true, 00:12:47.233 "nvme_iov_md": false 00:12:47.233 }, 00:12:47.233 "memory_domains": [ 00:12:47.233 { 00:12:47.233 "dma_device_id": "system", 00:12:47.233 "dma_device_type": 1 00:12:47.233 }, 00:12:47.233 { 00:12:47.233 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.233 "dma_device_type": 2 00:12:47.233 } 00:12:47.233 ], 00:12:47.233 "driver_specific": {} 00:12:47.233 } 00:12:47.233 ] 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.233 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.492 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:47.492 "name": "Existed_Raid", 00:12:47.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.492 "strip_size_kb": 64, 00:12:47.492 "state": "configuring", 00:12:47.492 "raid_level": "raid0", 00:12:47.492 "superblock": false, 00:12:47.492 "num_base_bdevs": 4, 00:12:47.492 "num_base_bdevs_discovered": 1, 00:12:47.492 "num_base_bdevs_operational": 4, 00:12:47.492 "base_bdevs_list": [ 00:12:47.492 { 00:12:47.492 "name": "BaseBdev1", 00:12:47.492 "uuid": "19c7ca0c-42d0-11ef-96ac-773515fba644", 00:12:47.492 "is_configured": true, 00:12:47.492 "data_offset": 0, 00:12:47.492 "data_size": 65536 00:12:47.492 }, 00:12:47.492 { 00:12:47.492 "name": "BaseBdev2", 00:12:47.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.492 "is_configured": false, 00:12:47.492 "data_offset": 0, 00:12:47.492 "data_size": 0 00:12:47.492 }, 00:12:47.492 { 00:12:47.492 "name": "BaseBdev3", 00:12:47.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.492 "is_configured": false, 00:12:47.492 "data_offset": 0, 00:12:47.492 "data_size": 0 00:12:47.492 }, 00:12:47.492 { 00:12:47.492 "name": "BaseBdev4", 00:12:47.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:47.492 "is_configured": false, 00:12:47.492 "data_offset": 0, 00:12:47.492 "data_size": 0 00:12:47.492 } 00:12:47.492 ] 00:12:47.492 }' 00:12:47.492 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:47.492 17:31:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.103 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:48.103 [2024-07-15 17:31:43.880957] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.103 [2024-07-15 17:31:43.880988] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x301c4834500 name Existed_Raid, state configuring 00:12:48.103 17:31:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:12:48.361 [2024-07-15 17:31:44.116986] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:48.362 [2024-07-15 17:31:44.117898] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:48.362 [2024-07-15 17:31:44.117952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:48.362 [2024-07-15 17:31:44.117957] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:48.362 [2024-07-15 17:31:44.117981] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:48.362 [2024-07-15 17:31:44.117984] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:48.362 [2024-07-15 17:31:44.117991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:48.362 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:48.621 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.621 "name": "Existed_Raid", 00:12:48.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.621 "strip_size_kb": 64, 00:12:48.621 "state": "configuring", 00:12:48.621 "raid_level": "raid0", 00:12:48.621 "superblock": false, 00:12:48.621 "num_base_bdevs": 4, 00:12:48.621 "num_base_bdevs_discovered": 1, 00:12:48.621 "num_base_bdevs_operational": 4, 00:12:48.621 "base_bdevs_list": [ 00:12:48.621 { 00:12:48.621 "name": "BaseBdev1", 00:12:48.621 "uuid": "19c7ca0c-42d0-11ef-96ac-773515fba644", 00:12:48.621 "is_configured": true, 00:12:48.621 "data_offset": 0, 00:12:48.621 "data_size": 65536 00:12:48.621 }, 00:12:48.621 { 00:12:48.621 "name": "BaseBdev2", 00:12:48.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.621 "is_configured": false, 00:12:48.621 "data_offset": 0, 00:12:48.621 "data_size": 
0 00:12:48.621 }, 00:12:48.621 { 00:12:48.621 "name": "BaseBdev3", 00:12:48.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.621 "is_configured": false, 00:12:48.621 "data_offset": 0, 00:12:48.621 "data_size": 0 00:12:48.621 }, 00:12:48.621 { 00:12:48.621 "name": "BaseBdev4", 00:12:48.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.621 "is_configured": false, 00:12:48.621 "data_offset": 0, 00:12:48.621 "data_size": 0 00:12:48.621 } 00:12:48.621 ] 00:12:48.621 }' 00:12:48.621 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.621 17:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.187 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:49.187 [2024-07-15 17:31:44.945158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:49.187 BaseBdev2 00:12:49.187 17:31:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:49.187 17:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:49.187 17:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:49.187 17:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:49.187 17:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:49.187 17:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:49.187 17:31:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:49.446 17:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:49.704 [ 00:12:49.704 { 00:12:49.704 "name": "BaseBdev2", 00:12:49.704 "aliases": [ 00:12:49.704 "1b3df2bc-42d0-11ef-96ac-773515fba644" 00:12:49.704 ], 00:12:49.704 "product_name": "Malloc disk", 00:12:49.704 "block_size": 512, 00:12:49.704 "num_blocks": 65536, 00:12:49.704 "uuid": "1b3df2bc-42d0-11ef-96ac-773515fba644", 00:12:49.704 "assigned_rate_limits": { 00:12:49.704 "rw_ios_per_sec": 0, 00:12:49.704 "rw_mbytes_per_sec": 0, 00:12:49.704 "r_mbytes_per_sec": 0, 00:12:49.704 "w_mbytes_per_sec": 0 00:12:49.704 }, 00:12:49.704 "claimed": true, 00:12:49.704 "claim_type": "exclusive_write", 00:12:49.704 "zoned": false, 00:12:49.704 "supported_io_types": { 00:12:49.704 "read": true, 00:12:49.704 "write": true, 00:12:49.704 "unmap": true, 00:12:49.704 "flush": true, 00:12:49.704 "reset": true, 00:12:49.704 "nvme_admin": false, 00:12:49.704 "nvme_io": false, 00:12:49.704 "nvme_io_md": false, 00:12:49.704 "write_zeroes": true, 00:12:49.704 "zcopy": true, 00:12:49.704 "get_zone_info": false, 00:12:49.704 "zone_management": false, 00:12:49.704 "zone_append": false, 00:12:49.704 "compare": false, 00:12:49.704 "compare_and_write": false, 00:12:49.704 "abort": true, 00:12:49.704 "seek_hole": false, 00:12:49.704 "seek_data": false, 00:12:49.704 "copy": true, 00:12:49.704 "nvme_iov_md": false 00:12:49.704 }, 00:12:49.704 "memory_domains": [ 00:12:49.704 { 00:12:49.704 "dma_device_id": "system", 00:12:49.704 "dma_device_type": 1 
00:12:49.704 }, 00:12:49.704 { 00:12:49.704 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:49.704 "dma_device_type": 2 00:12:49.704 } 00:12:49.704 ], 00:12:49.704 "driver_specific": {} 00:12:49.704 } 00:12:49.704 ] 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.704 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:49.962 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:49.962 "name": "Existed_Raid", 00:12:49.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.962 "strip_size_kb": 64, 00:12:49.962 "state": "configuring", 00:12:49.962 "raid_level": "raid0", 00:12:49.962 "superblock": false, 00:12:49.962 "num_base_bdevs": 4, 00:12:49.962 "num_base_bdevs_discovered": 2, 00:12:49.962 "num_base_bdevs_operational": 4, 00:12:49.962 "base_bdevs_list": [ 00:12:49.962 { 00:12:49.962 "name": "BaseBdev1", 00:12:49.962 "uuid": "19c7ca0c-42d0-11ef-96ac-773515fba644", 00:12:49.962 "is_configured": true, 00:12:49.962 "data_offset": 0, 00:12:49.962 "data_size": 65536 00:12:49.962 }, 00:12:49.962 { 00:12:49.962 "name": "BaseBdev2", 00:12:49.962 "uuid": "1b3df2bc-42d0-11ef-96ac-773515fba644", 00:12:49.962 "is_configured": true, 00:12:49.962 "data_offset": 0, 00:12:49.962 "data_size": 65536 00:12:49.962 }, 00:12:49.962 { 00:12:49.962 "name": "BaseBdev3", 00:12:49.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.962 "is_configured": false, 00:12:49.962 "data_offset": 0, 00:12:49.962 "data_size": 0 00:12:49.962 }, 00:12:49.962 { 00:12:49.962 "name": "BaseBdev4", 00:12:49.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:49.962 "is_configured": false, 00:12:49.962 "data_offset": 0, 00:12:49.962 "data_size": 0 00:12:49.962 } 00:12:49.962 ] 00:12:49.962 }' 00:12:49.962 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:49.962 17:31:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.220 17:31:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:50.478 [2024-07-15 17:31:46.221295] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:50.478 BaseBdev3 00:12:50.478 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:50.478 17:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:50.478 17:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:50.478 17:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:50.478 17:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:50.478 17:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:50.478 17:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:50.737 17:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:50.995 [ 00:12:50.995 { 00:12:50.995 "name": "BaseBdev3", 00:12:50.995 "aliases": [ 00:12:50.995 "1c00aca4-42d0-11ef-96ac-773515fba644" 00:12:50.995 ], 00:12:50.995 "product_name": "Malloc disk", 00:12:50.995 "block_size": 512, 00:12:50.995 "num_blocks": 65536, 00:12:50.995 "uuid": "1c00aca4-42d0-11ef-96ac-773515fba644", 00:12:50.995 "assigned_rate_limits": { 00:12:50.995 "rw_ios_per_sec": 0, 00:12:50.995 "rw_mbytes_per_sec": 0, 00:12:50.995 "r_mbytes_per_sec": 0, 00:12:50.995 "w_mbytes_per_sec": 0 00:12:50.995 }, 00:12:50.995 "claimed": true, 00:12:50.995 "claim_type": "exclusive_write", 00:12:50.995 "zoned": false, 00:12:50.995 "supported_io_types": { 00:12:50.995 "read": true, 00:12:50.995 "write": true, 00:12:50.995 "unmap": true, 00:12:50.995 "flush": true, 00:12:50.995 "reset": true, 00:12:50.995 "nvme_admin": false, 00:12:50.995 "nvme_io": false, 00:12:50.995 "nvme_io_md": false, 00:12:50.995 "write_zeroes": true, 00:12:50.995 "zcopy": true, 00:12:50.995 "get_zone_info": false, 00:12:50.995 "zone_management": false, 00:12:50.995 "zone_append": false, 00:12:50.995 "compare": false, 00:12:50.995 "compare_and_write": false, 00:12:50.995 "abort": true, 00:12:50.995 "seek_hole": false, 00:12:50.995 "seek_data": false, 00:12:50.995 "copy": true, 00:12:50.995 "nvme_iov_md": false 00:12:50.995 }, 00:12:50.995 "memory_domains": [ 00:12:50.995 { 00:12:50.995 "dma_device_id": "system", 00:12:50.995 "dma_device_type": 1 00:12:50.995 }, 00:12:50.995 { 00:12:50.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:50.995 "dma_device_type": 2 00:12:50.995 } 00:12:50.995 ], 00:12:50.995 "driver_specific": {} 00:12:50.995 } 00:12:50.995 ] 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.995 17:31:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.254 17:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:51.254 "name": "Existed_Raid", 00:12:51.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.254 "strip_size_kb": 64, 00:12:51.254 "state": "configuring", 00:12:51.254 "raid_level": "raid0", 00:12:51.254 "superblock": false, 00:12:51.254 "num_base_bdevs": 4, 00:12:51.254 "num_base_bdevs_discovered": 3, 00:12:51.254 "num_base_bdevs_operational": 4, 00:12:51.254 "base_bdevs_list": [ 00:12:51.254 { 00:12:51.254 "name": "BaseBdev1", 00:12:51.254 "uuid": "19c7ca0c-42d0-11ef-96ac-773515fba644", 00:12:51.254 "is_configured": true, 00:12:51.254 "data_offset": 0, 00:12:51.254 "data_size": 65536 00:12:51.254 }, 00:12:51.254 { 00:12:51.254 "name": "BaseBdev2", 00:12:51.254 "uuid": "1b3df2bc-42d0-11ef-96ac-773515fba644", 00:12:51.254 "is_configured": true, 00:12:51.254 "data_offset": 0, 00:12:51.254 "data_size": 65536 00:12:51.254 }, 00:12:51.254 { 00:12:51.254 "name": "BaseBdev3", 00:12:51.254 "uuid": "1c00aca4-42d0-11ef-96ac-773515fba644", 00:12:51.254 "is_configured": true, 00:12:51.254 "data_offset": 0, 00:12:51.254 "data_size": 65536 00:12:51.254 }, 00:12:51.254 { 00:12:51.254 "name": "BaseBdev4", 00:12:51.254 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.254 "is_configured": false, 00:12:51.254 "data_offset": 0, 00:12:51.254 "data_size": 0 00:12:51.254 } 00:12:51.254 ] 00:12:51.254 }' 00:12:51.254 17:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:51.254 17:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.513 17:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:51.801 [2024-07-15 17:31:47.617378] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:51.801 [2024-07-15 17:31:47.617410] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x301c4834a00 00:12:51.801 [2024-07-15 17:31:47.617415] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:12:51.801 [2024-07-15 17:31:47.617447] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x301c4897e20 00:12:51.801 [2024-07-15 17:31:47.617540] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x301c4834a00 00:12:51.801 [2024-07-15 17:31:47.617544] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x301c4834a00 00:12:51.801 [2024-07-15 17:31:47.617583] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.801 BaseBdev4 00:12:52.058 17:31:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:12:52.058 17:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:12:52.058 17:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:52.058 17:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:52.058 17:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:52.058 17:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:52.058 17:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:52.316 17:31:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:52.574 [ 00:12:52.574 { 00:12:52.574 "name": "BaseBdev4", 00:12:52.574 "aliases": [ 00:12:52.574 "1cd5b1f8-42d0-11ef-96ac-773515fba644" 00:12:52.574 ], 00:12:52.574 "product_name": "Malloc disk", 00:12:52.574 "block_size": 512, 00:12:52.574 "num_blocks": 65536, 00:12:52.574 "uuid": "1cd5b1f8-42d0-11ef-96ac-773515fba644", 00:12:52.574 "assigned_rate_limits": { 00:12:52.574 "rw_ios_per_sec": 0, 00:12:52.574 "rw_mbytes_per_sec": 0, 00:12:52.574 "r_mbytes_per_sec": 0, 00:12:52.574 "w_mbytes_per_sec": 0 00:12:52.574 }, 00:12:52.574 "claimed": true, 00:12:52.574 "claim_type": "exclusive_write", 00:12:52.574 "zoned": false, 00:12:52.574 "supported_io_types": { 00:12:52.574 "read": true, 00:12:52.574 "write": true, 00:12:52.574 "unmap": true, 00:12:52.574 "flush": true, 00:12:52.574 "reset": true, 00:12:52.574 "nvme_admin": false, 00:12:52.574 "nvme_io": false, 00:12:52.574 "nvme_io_md": false, 00:12:52.574 "write_zeroes": true, 00:12:52.574 "zcopy": true, 00:12:52.574 "get_zone_info": false, 00:12:52.574 "zone_management": false, 00:12:52.574 "zone_append": false, 00:12:52.574 "compare": false, 00:12:52.574 "compare_and_write": false, 00:12:52.574 "abort": true, 00:12:52.574 "seek_hole": false, 00:12:52.574 "seek_data": false, 00:12:52.574 "copy": true, 00:12:52.574 "nvme_iov_md": false 00:12:52.574 }, 00:12:52.574 "memory_domains": [ 00:12:52.574 { 00:12:52.574 "dma_device_id": "system", 00:12:52.574 "dma_device_type": 1 00:12:52.574 }, 00:12:52.574 { 00:12:52.574 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.574 "dma_device_type": 2 00:12:52.574 } 00:12:52.574 ], 00:12:52.574 "driver_specific": {} 00:12:52.574 } 00:12:52.574 ] 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:52.574 17:31:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.574 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.832 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.832 "name": "Existed_Raid", 00:12:52.832 "uuid": "1cd5b9d2-42d0-11ef-96ac-773515fba644", 00:12:52.832 "strip_size_kb": 64, 00:12:52.832 "state": "online", 00:12:52.832 "raid_level": "raid0", 00:12:52.832 "superblock": false, 00:12:52.832 "num_base_bdevs": 4, 00:12:52.832 "num_base_bdevs_discovered": 4, 00:12:52.832 "num_base_bdevs_operational": 4, 00:12:52.832 "base_bdevs_list": [ 00:12:52.832 { 00:12:52.832 "name": "BaseBdev1", 00:12:52.832 "uuid": "19c7ca0c-42d0-11ef-96ac-773515fba644", 00:12:52.832 "is_configured": true, 00:12:52.832 "data_offset": 0, 00:12:52.832 "data_size": 65536 00:12:52.833 }, 00:12:52.833 { 00:12:52.833 "name": "BaseBdev2", 00:12:52.833 "uuid": "1b3df2bc-42d0-11ef-96ac-773515fba644", 00:12:52.833 "is_configured": true, 00:12:52.833 "data_offset": 0, 00:12:52.833 "data_size": 65536 00:12:52.833 }, 00:12:52.833 { 00:12:52.833 "name": "BaseBdev3", 00:12:52.833 "uuid": "1c00aca4-42d0-11ef-96ac-773515fba644", 00:12:52.833 "is_configured": true, 00:12:52.833 "data_offset": 0, 00:12:52.833 "data_size": 65536 00:12:52.833 }, 00:12:52.833 { 00:12:52.833 "name": "BaseBdev4", 00:12:52.833 "uuid": "1cd5b1f8-42d0-11ef-96ac-773515fba644", 00:12:52.833 "is_configured": true, 00:12:52.833 "data_offset": 0, 00:12:52.833 "data_size": 65536 00:12:52.833 } 00:12:52.833 ] 00:12:52.833 }' 00:12:52.833 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.833 17:31:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.399 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:53.399 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:53.399 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:53.399 
17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:53.399 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:53.399 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:53.399 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:53.400 17:31:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:53.400 [2024-07-15 17:31:49.221340] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.658 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:53.658 "name": "Existed_Raid", 00:12:53.658 "aliases": [ 00:12:53.658 "1cd5b9d2-42d0-11ef-96ac-773515fba644" 00:12:53.658 ], 00:12:53.658 "product_name": "Raid Volume", 00:12:53.658 "block_size": 512, 00:12:53.658 "num_blocks": 262144, 00:12:53.658 "uuid": "1cd5b9d2-42d0-11ef-96ac-773515fba644", 00:12:53.658 "assigned_rate_limits": { 00:12:53.658 "rw_ios_per_sec": 0, 00:12:53.658 "rw_mbytes_per_sec": 0, 00:12:53.658 "r_mbytes_per_sec": 0, 00:12:53.658 "w_mbytes_per_sec": 0 00:12:53.658 }, 00:12:53.658 "claimed": false, 00:12:53.658 "zoned": false, 00:12:53.658 "supported_io_types": { 00:12:53.658 "read": true, 00:12:53.658 "write": true, 00:12:53.658 "unmap": true, 00:12:53.658 "flush": true, 00:12:53.658 "reset": true, 00:12:53.658 "nvme_admin": false, 00:12:53.658 "nvme_io": false, 00:12:53.658 "nvme_io_md": false, 00:12:53.658 "write_zeroes": true, 00:12:53.658 "zcopy": false, 00:12:53.658 "get_zone_info": false, 00:12:53.658 "zone_management": false, 00:12:53.658 "zone_append": false, 00:12:53.658 "compare": false, 00:12:53.658 "compare_and_write": false, 00:12:53.658 "abort": false, 00:12:53.658 "seek_hole": false, 00:12:53.658 "seek_data": false, 00:12:53.658 "copy": false, 00:12:53.658 "nvme_iov_md": false 00:12:53.658 }, 00:12:53.658 "memory_domains": [ 00:12:53.658 { 00:12:53.658 "dma_device_id": "system", 00:12:53.658 "dma_device_type": 1 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.658 "dma_device_type": 2 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "dma_device_id": "system", 00:12:53.658 "dma_device_type": 1 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.658 "dma_device_type": 2 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "dma_device_id": "system", 00:12:53.658 "dma_device_type": 1 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.658 "dma_device_type": 2 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "dma_device_id": "system", 00:12:53.658 "dma_device_type": 1 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.658 "dma_device_type": 2 00:12:53.658 } 00:12:53.658 ], 00:12:53.658 "driver_specific": { 00:12:53.658 "raid": { 00:12:53.658 "uuid": "1cd5b9d2-42d0-11ef-96ac-773515fba644", 00:12:53.658 "strip_size_kb": 64, 00:12:53.658 "state": "online", 00:12:53.658 "raid_level": "raid0", 00:12:53.658 "superblock": false, 00:12:53.658 "num_base_bdevs": 4, 00:12:53.658 "num_base_bdevs_discovered": 4, 00:12:53.658 "num_base_bdevs_operational": 4, 00:12:53.658 "base_bdevs_list": [ 00:12:53.658 { 00:12:53.658 "name": "BaseBdev1", 00:12:53.658 "uuid": "19c7ca0c-42d0-11ef-96ac-773515fba644", 00:12:53.658 
"is_configured": true, 00:12:53.658 "data_offset": 0, 00:12:53.658 "data_size": 65536 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "name": "BaseBdev2", 00:12:53.658 "uuid": "1b3df2bc-42d0-11ef-96ac-773515fba644", 00:12:53.658 "is_configured": true, 00:12:53.658 "data_offset": 0, 00:12:53.658 "data_size": 65536 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "name": "BaseBdev3", 00:12:53.658 "uuid": "1c00aca4-42d0-11ef-96ac-773515fba644", 00:12:53.658 "is_configured": true, 00:12:53.658 "data_offset": 0, 00:12:53.658 "data_size": 65536 00:12:53.658 }, 00:12:53.658 { 00:12:53.658 "name": "BaseBdev4", 00:12:53.658 "uuid": "1cd5b1f8-42d0-11ef-96ac-773515fba644", 00:12:53.658 "is_configured": true, 00:12:53.658 "data_offset": 0, 00:12:53.658 "data_size": 65536 00:12:53.658 } 00:12:53.658 ] 00:12:53.658 } 00:12:53.658 } 00:12:53.658 }' 00:12:53.658 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:53.658 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:53.658 BaseBdev2 00:12:53.658 BaseBdev3 00:12:53.658 BaseBdev4' 00:12:53.658 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.658 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:53.658 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:53.917 "name": "BaseBdev1", 00:12:53.917 "aliases": [ 00:12:53.917 "19c7ca0c-42d0-11ef-96ac-773515fba644" 00:12:53.917 ], 00:12:53.917 "product_name": "Malloc disk", 00:12:53.917 "block_size": 512, 00:12:53.917 "num_blocks": 65536, 00:12:53.917 "uuid": "19c7ca0c-42d0-11ef-96ac-773515fba644", 00:12:53.917 "assigned_rate_limits": { 00:12:53.917 "rw_ios_per_sec": 0, 00:12:53.917 "rw_mbytes_per_sec": 0, 00:12:53.917 "r_mbytes_per_sec": 0, 00:12:53.917 "w_mbytes_per_sec": 0 00:12:53.917 }, 00:12:53.917 "claimed": true, 00:12:53.917 "claim_type": "exclusive_write", 00:12:53.917 "zoned": false, 00:12:53.917 "supported_io_types": { 00:12:53.917 "read": true, 00:12:53.917 "write": true, 00:12:53.917 "unmap": true, 00:12:53.917 "flush": true, 00:12:53.917 "reset": true, 00:12:53.917 "nvme_admin": false, 00:12:53.917 "nvme_io": false, 00:12:53.917 "nvme_io_md": false, 00:12:53.917 "write_zeroes": true, 00:12:53.917 "zcopy": true, 00:12:53.917 "get_zone_info": false, 00:12:53.917 "zone_management": false, 00:12:53.917 "zone_append": false, 00:12:53.917 "compare": false, 00:12:53.917 "compare_and_write": false, 00:12:53.917 "abort": true, 00:12:53.917 "seek_hole": false, 00:12:53.917 "seek_data": false, 00:12:53.917 "copy": true, 00:12:53.917 "nvme_iov_md": false 00:12:53.917 }, 00:12:53.917 "memory_domains": [ 00:12:53.917 { 00:12:53.917 "dma_device_id": "system", 00:12:53.917 "dma_device_type": 1 00:12:53.917 }, 00:12:53.917 { 00:12:53.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.917 "dma_device_type": 2 00:12:53.917 } 00:12:53.917 ], 00:12:53.917 "driver_specific": {} 00:12:53.917 }' 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:53.917 17:31:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:53.917 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.175 "name": "BaseBdev2", 00:12:54.175 "aliases": [ 00:12:54.175 "1b3df2bc-42d0-11ef-96ac-773515fba644" 00:12:54.175 ], 00:12:54.175 "product_name": "Malloc disk", 00:12:54.175 "block_size": 512, 00:12:54.175 "num_blocks": 65536, 00:12:54.175 "uuid": "1b3df2bc-42d0-11ef-96ac-773515fba644", 00:12:54.175 "assigned_rate_limits": { 00:12:54.175 "rw_ios_per_sec": 0, 00:12:54.175 "rw_mbytes_per_sec": 0, 00:12:54.175 "r_mbytes_per_sec": 0, 00:12:54.175 "w_mbytes_per_sec": 0 00:12:54.175 }, 00:12:54.175 "claimed": true, 00:12:54.175 "claim_type": "exclusive_write", 00:12:54.175 "zoned": false, 00:12:54.175 "supported_io_types": { 00:12:54.175 "read": true, 00:12:54.175 "write": true, 00:12:54.175 "unmap": true, 00:12:54.175 "flush": true, 00:12:54.175 "reset": true, 00:12:54.175 "nvme_admin": false, 00:12:54.175 "nvme_io": false, 00:12:54.175 "nvme_io_md": false, 00:12:54.175 "write_zeroes": true, 00:12:54.175 "zcopy": true, 00:12:54.175 "get_zone_info": false, 00:12:54.175 "zone_management": false, 00:12:54.175 "zone_append": false, 00:12:54.175 "compare": false, 00:12:54.175 "compare_and_write": false, 00:12:54.175 "abort": true, 00:12:54.175 "seek_hole": false, 00:12:54.175 "seek_data": false, 00:12:54.175 "copy": true, 00:12:54.175 "nvme_iov_md": false 00:12:54.175 }, 00:12:54.175 "memory_domains": [ 00:12:54.175 { 00:12:54.175 "dma_device_id": "system", 00:12:54.175 "dma_device_type": 1 00:12:54.175 }, 00:12:54.175 { 00:12:54.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.175 "dma_device_type": 2 00:12:54.175 } 00:12:54.175 ], 00:12:54.175 "driver_specific": {} 00:12:54.175 }' 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.175 
17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:54.175 17:31:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.432 "name": "BaseBdev3", 00:12:54.432 "aliases": [ 00:12:54.432 "1c00aca4-42d0-11ef-96ac-773515fba644" 00:12:54.432 ], 00:12:54.432 "product_name": "Malloc disk", 00:12:54.432 "block_size": 512, 00:12:54.432 "num_blocks": 65536, 00:12:54.432 "uuid": "1c00aca4-42d0-11ef-96ac-773515fba644", 00:12:54.432 "assigned_rate_limits": { 00:12:54.432 "rw_ios_per_sec": 0, 00:12:54.432 "rw_mbytes_per_sec": 0, 00:12:54.432 "r_mbytes_per_sec": 0, 00:12:54.432 "w_mbytes_per_sec": 0 00:12:54.432 }, 00:12:54.432 "claimed": true, 00:12:54.432 "claim_type": "exclusive_write", 00:12:54.432 "zoned": false, 00:12:54.432 "supported_io_types": { 00:12:54.432 "read": true, 00:12:54.432 "write": true, 00:12:54.432 "unmap": true, 00:12:54.432 "flush": true, 00:12:54.432 "reset": true, 00:12:54.432 "nvme_admin": false, 00:12:54.432 "nvme_io": false, 00:12:54.432 "nvme_io_md": false, 00:12:54.432 "write_zeroes": true, 00:12:54.432 "zcopy": true, 00:12:54.432 "get_zone_info": false, 00:12:54.432 "zone_management": false, 00:12:54.432 "zone_append": false, 00:12:54.432 "compare": false, 00:12:54.432 "compare_and_write": false, 00:12:54.432 "abort": true, 00:12:54.432 "seek_hole": false, 00:12:54.432 "seek_data": false, 00:12:54.432 "copy": true, 00:12:54.432 "nvme_iov_md": false 00:12:54.432 }, 00:12:54.432 "memory_domains": [ 00:12:54.432 { 00:12:54.432 "dma_device_id": "system", 00:12:54.432 "dma_device_type": 1 00:12:54.432 }, 00:12:54.432 { 00:12:54.432 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.432 "dma_device_type": 2 00:12:54.432 } 00:12:54.432 ], 00:12:54.432 "driver_specific": {} 00:12:54.432 }' 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:54.432 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:54.689 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:54.689 "name": "BaseBdev4", 00:12:54.689 "aliases": [ 00:12:54.689 "1cd5b1f8-42d0-11ef-96ac-773515fba644" 00:12:54.689 ], 00:12:54.689 "product_name": "Malloc disk", 00:12:54.689 "block_size": 512, 00:12:54.689 "num_blocks": 65536, 00:12:54.689 "uuid": "1cd5b1f8-42d0-11ef-96ac-773515fba644", 00:12:54.689 "assigned_rate_limits": { 00:12:54.689 "rw_ios_per_sec": 0, 00:12:54.689 "rw_mbytes_per_sec": 0, 00:12:54.689 "r_mbytes_per_sec": 0, 00:12:54.689 "w_mbytes_per_sec": 0 00:12:54.689 }, 00:12:54.689 "claimed": true, 00:12:54.689 "claim_type": "exclusive_write", 00:12:54.689 "zoned": false, 00:12:54.689 "supported_io_types": { 00:12:54.689 "read": true, 00:12:54.689 "write": true, 00:12:54.689 "unmap": true, 00:12:54.689 "flush": true, 00:12:54.689 "reset": true, 00:12:54.689 "nvme_admin": false, 00:12:54.689 "nvme_io": false, 00:12:54.689 "nvme_io_md": false, 00:12:54.689 "write_zeroes": true, 00:12:54.689 "zcopy": true, 00:12:54.689 "get_zone_info": false, 00:12:54.689 "zone_management": false, 00:12:54.689 "zone_append": false, 00:12:54.689 "compare": false, 00:12:54.689 "compare_and_write": false, 00:12:54.689 "abort": true, 00:12:54.689 "seek_hole": false, 00:12:54.689 "seek_data": false, 00:12:54.689 "copy": true, 00:12:54.689 "nvme_iov_md": false 00:12:54.689 }, 00:12:54.689 "memory_domains": [ 00:12:54.689 { 00:12:54.689 "dma_device_id": "system", 00:12:54.689 "dma_device_type": 1 00:12:54.689 }, 00:12:54.689 { 00:12:54.689 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.689 "dma_device_type": 2 00:12:54.689 } 00:12:54.689 ], 00:12:54.689 "driver_specific": {} 00:12:54.689 }' 00:12:54.689 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.689 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:54.689 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:54.689 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.689 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:54.689 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:54.689 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:54.947 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:12:54.947 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:54.947 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.947 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:54.947 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:54.947 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:55.205 [2024-07-15 17:31:50.813394] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.205 [2024-07-15 17:31:50.813421] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.205 [2024-07-15 17:31:50.813437] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.205 17:31:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.462 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:55.462 "name": "Existed_Raid", 00:12:55.462 "uuid": "1cd5b9d2-42d0-11ef-96ac-773515fba644", 00:12:55.462 "strip_size_kb": 64, 00:12:55.462 "state": "offline", 00:12:55.462 "raid_level": "raid0", 00:12:55.462 "superblock": false, 00:12:55.462 "num_base_bdevs": 4, 00:12:55.462 "num_base_bdevs_discovered": 3, 00:12:55.462 "num_base_bdevs_operational": 3, 00:12:55.462 "base_bdevs_list": [ 00:12:55.462 { 00:12:55.462 "name": null, 00:12:55.462 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:55.462 "is_configured": false, 00:12:55.462 "data_offset": 0, 00:12:55.462 "data_size": 65536 00:12:55.462 }, 00:12:55.462 { 00:12:55.462 "name": "BaseBdev2", 00:12:55.462 "uuid": "1b3df2bc-42d0-11ef-96ac-773515fba644", 00:12:55.462 "is_configured": true, 00:12:55.462 "data_offset": 0, 00:12:55.462 "data_size": 65536 00:12:55.462 }, 00:12:55.462 { 00:12:55.462 "name": "BaseBdev3", 00:12:55.462 "uuid": "1c00aca4-42d0-11ef-96ac-773515fba644", 00:12:55.462 "is_configured": true, 00:12:55.462 "data_offset": 0, 00:12:55.462 "data_size": 65536 00:12:55.462 }, 00:12:55.462 { 00:12:55.462 "name": "BaseBdev4", 00:12:55.462 "uuid": "1cd5b1f8-42d0-11ef-96ac-773515fba644", 00:12:55.462 "is_configured": true, 00:12:55.462 "data_offset": 0, 00:12:55.462 "data_size": 65536 00:12:55.462 } 00:12:55.462 ] 00:12:55.462 }' 00:12:55.462 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:55.462 17:31:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.720 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:55.720 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:55.720 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:55.720 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:55.978 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:55.978 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.978 17:31:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:56.236 [2024-07-15 17:31:51.991837] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:56.236 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:56.236 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:56.236 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:56.236 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.494 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:56.494 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:56.494 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:56.752 [2024-07-15 17:31:52.581690] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:57.010 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:57.010 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:57.010 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.010 17:31:52 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:57.267 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:57.267 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:57.267 17:31:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:57.525 [2024-07-15 17:31:53.112075] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:57.525 [2024-07-15 17:31:53.112107] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x301c4834a00 name Existed_Raid, state offline 00:12:57.525 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:57.525 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:57.525 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:57.525 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.785 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:57.785 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:57.785 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:12:57.786 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:57.786 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:57.786 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:58.043 BaseBdev2 00:12:58.043 17:31:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:58.043 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:58.043 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:58.043 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:58.043 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:58.043 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:58.043 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:58.301 17:31:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:58.558 [ 00:12:58.558 { 00:12:58.558 "name": "BaseBdev2", 00:12:58.558 "aliases": [ 00:12:58.558 "20729eef-42d0-11ef-96ac-773515fba644" 00:12:58.558 ], 00:12:58.558 "product_name": "Malloc disk", 00:12:58.558 "block_size": 512, 00:12:58.558 "num_blocks": 65536, 00:12:58.558 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:12:58.558 "assigned_rate_limits": { 00:12:58.558 "rw_ios_per_sec": 0, 00:12:58.558 "rw_mbytes_per_sec": 0, 00:12:58.558 "r_mbytes_per_sec": 0, 00:12:58.559 "w_mbytes_per_sec": 0 
00:12:58.559 }, 00:12:58.559 "claimed": false, 00:12:58.559 "zoned": false, 00:12:58.559 "supported_io_types": { 00:12:58.559 "read": true, 00:12:58.559 "write": true, 00:12:58.559 "unmap": true, 00:12:58.559 "flush": true, 00:12:58.559 "reset": true, 00:12:58.559 "nvme_admin": false, 00:12:58.559 "nvme_io": false, 00:12:58.559 "nvme_io_md": false, 00:12:58.559 "write_zeroes": true, 00:12:58.559 "zcopy": true, 00:12:58.559 "get_zone_info": false, 00:12:58.559 "zone_management": false, 00:12:58.559 "zone_append": false, 00:12:58.559 "compare": false, 00:12:58.559 "compare_and_write": false, 00:12:58.559 "abort": true, 00:12:58.559 "seek_hole": false, 00:12:58.559 "seek_data": false, 00:12:58.559 "copy": true, 00:12:58.559 "nvme_iov_md": false 00:12:58.559 }, 00:12:58.559 "memory_domains": [ 00:12:58.559 { 00:12:58.559 "dma_device_id": "system", 00:12:58.559 "dma_device_type": 1 00:12:58.559 }, 00:12:58.559 { 00:12:58.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.559 "dma_device_type": 2 00:12:58.559 } 00:12:58.559 ], 00:12:58.559 "driver_specific": {} 00:12:58.559 } 00:12:58.559 ] 00:12:58.559 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:58.559 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:58.559 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:58.559 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:58.816 BaseBdev3 00:12:58.816 17:31:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:58.816 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:58.816 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:58.816 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:58.816 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:58.816 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:58.816 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:59.089 17:31:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:59.348 [ 00:12:59.348 { 00:12:59.348 "name": "BaseBdev3", 00:12:59.348 "aliases": [ 00:12:59.348 "20ecb2a6-42d0-11ef-96ac-773515fba644" 00:12:59.348 ], 00:12:59.348 "product_name": "Malloc disk", 00:12:59.348 "block_size": 512, 00:12:59.348 "num_blocks": 65536, 00:12:59.348 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:12:59.348 "assigned_rate_limits": { 00:12:59.348 "rw_ios_per_sec": 0, 00:12:59.348 "rw_mbytes_per_sec": 0, 00:12:59.348 "r_mbytes_per_sec": 0, 00:12:59.348 "w_mbytes_per_sec": 0 00:12:59.348 }, 00:12:59.348 "claimed": false, 00:12:59.348 "zoned": false, 00:12:59.348 "supported_io_types": { 00:12:59.348 "read": true, 00:12:59.348 "write": true, 00:12:59.348 "unmap": true, 00:12:59.348 "flush": true, 00:12:59.348 "reset": true, 00:12:59.348 "nvme_admin": false, 00:12:59.348 "nvme_io": false, 00:12:59.348 "nvme_io_md": 
false, 00:12:59.348 "write_zeroes": true, 00:12:59.348 "zcopy": true, 00:12:59.348 "get_zone_info": false, 00:12:59.348 "zone_management": false, 00:12:59.348 "zone_append": false, 00:12:59.348 "compare": false, 00:12:59.348 "compare_and_write": false, 00:12:59.348 "abort": true, 00:12:59.348 "seek_hole": false, 00:12:59.348 "seek_data": false, 00:12:59.348 "copy": true, 00:12:59.348 "nvme_iov_md": false 00:12:59.348 }, 00:12:59.348 "memory_domains": [ 00:12:59.348 { 00:12:59.348 "dma_device_id": "system", 00:12:59.348 "dma_device_type": 1 00:12:59.348 }, 00:12:59.348 { 00:12:59.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:59.348 "dma_device_type": 2 00:12:59.348 } 00:12:59.348 ], 00:12:59.348 "driver_specific": {} 00:12:59.348 } 00:12:59.348 ] 00:12:59.348 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:59.348 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:59.348 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:59.348 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:59.606 BaseBdev4 00:12:59.606 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:12:59.606 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:12:59.606 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:59.606 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:59.606 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:59.606 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:59.606 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:59.864 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:00.121 [ 00:13:00.121 { 00:13:00.121 "name": "BaseBdev4", 00:13:00.121 "aliases": [ 00:13:00.121 "216fedbe-42d0-11ef-96ac-773515fba644" 00:13:00.121 ], 00:13:00.121 "product_name": "Malloc disk", 00:13:00.121 "block_size": 512, 00:13:00.121 "num_blocks": 65536, 00:13:00.121 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:00.121 "assigned_rate_limits": { 00:13:00.121 "rw_ios_per_sec": 0, 00:13:00.121 "rw_mbytes_per_sec": 0, 00:13:00.121 "r_mbytes_per_sec": 0, 00:13:00.121 "w_mbytes_per_sec": 0 00:13:00.121 }, 00:13:00.121 "claimed": false, 00:13:00.121 "zoned": false, 00:13:00.121 "supported_io_types": { 00:13:00.121 "read": true, 00:13:00.121 "write": true, 00:13:00.121 "unmap": true, 00:13:00.121 "flush": true, 00:13:00.121 "reset": true, 00:13:00.122 "nvme_admin": false, 00:13:00.122 "nvme_io": false, 00:13:00.122 "nvme_io_md": false, 00:13:00.122 "write_zeroes": true, 00:13:00.122 "zcopy": true, 00:13:00.122 "get_zone_info": false, 00:13:00.122 "zone_management": false, 00:13:00.122 "zone_append": false, 00:13:00.122 "compare": false, 00:13:00.122 "compare_and_write": false, 00:13:00.122 "abort": true, 00:13:00.122 "seek_hole": false, 00:13:00.122 "seek_data": false, 
00:13:00.122 "copy": true, 00:13:00.122 "nvme_iov_md": false 00:13:00.122 }, 00:13:00.122 "memory_domains": [ 00:13:00.122 { 00:13:00.122 "dma_device_id": "system", 00:13:00.122 "dma_device_type": 1 00:13:00.122 }, 00:13:00.122 { 00:13:00.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.122 "dma_device_type": 2 00:13:00.122 } 00:13:00.122 ], 00:13:00.122 "driver_specific": {} 00:13:00.122 } 00:13:00.122 ] 00:13:00.122 17:31:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:00.122 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:00.122 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:00.122 17:31:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:00.380 [2024-07-15 17:31:56.134985] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:00.380 [2024-07-15 17:31:56.135053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:00.380 [2024-07-15 17:31:56.135082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.380 [2024-07-15 17:31:56.135647] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:00.380 [2024-07-15 17:31:56.135666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.380 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.946 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:00.946 "name": "Existed_Raid", 00:13:00.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.946 "strip_size_kb": 64, 00:13:00.946 "state": "configuring", 00:13:00.946 "raid_level": "raid0", 00:13:00.946 "superblock": false, 00:13:00.946 "num_base_bdevs": 4, 00:13:00.946 "num_base_bdevs_discovered": 3, 00:13:00.946 "num_base_bdevs_operational": 
4, 00:13:00.946 "base_bdevs_list": [ 00:13:00.946 { 00:13:00.946 "name": "BaseBdev1", 00:13:00.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.946 "is_configured": false, 00:13:00.946 "data_offset": 0, 00:13:00.946 "data_size": 0 00:13:00.946 }, 00:13:00.946 { 00:13:00.946 "name": "BaseBdev2", 00:13:00.946 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:00.946 "is_configured": true, 00:13:00.946 "data_offset": 0, 00:13:00.946 "data_size": 65536 00:13:00.946 }, 00:13:00.946 { 00:13:00.946 "name": "BaseBdev3", 00:13:00.946 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:00.946 "is_configured": true, 00:13:00.946 "data_offset": 0, 00:13:00.946 "data_size": 65536 00:13:00.946 }, 00:13:00.946 { 00:13:00.946 "name": "BaseBdev4", 00:13:00.946 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:00.946 "is_configured": true, 00:13:00.946 "data_offset": 0, 00:13:00.946 "data_size": 65536 00:13:00.946 } 00:13:00.946 ] 00:13:00.946 }' 00:13:00.946 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:00.946 17:31:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.204 17:31:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:01.463 [2024-07-15 17:31:57.047047] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.463 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.722 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:01.722 "name": "Existed_Raid", 00:13:01.722 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.722 "strip_size_kb": 64, 00:13:01.722 "state": "configuring", 00:13:01.722 "raid_level": "raid0", 00:13:01.722 "superblock": false, 00:13:01.722 "num_base_bdevs": 4, 00:13:01.722 "num_base_bdevs_discovered": 2, 00:13:01.722 "num_base_bdevs_operational": 4, 00:13:01.722 "base_bdevs_list": [ 00:13:01.722 { 00:13:01.722 "name": "BaseBdev1", 00:13:01.722 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:01.722 "is_configured": false, 00:13:01.722 "data_offset": 0, 00:13:01.722 "data_size": 0 00:13:01.722 }, 00:13:01.722 { 00:13:01.722 "name": null, 00:13:01.722 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:01.722 "is_configured": false, 00:13:01.722 "data_offset": 0, 00:13:01.722 "data_size": 65536 00:13:01.722 }, 00:13:01.722 { 00:13:01.722 "name": "BaseBdev3", 00:13:01.722 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:01.722 "is_configured": true, 00:13:01.722 "data_offset": 0, 00:13:01.722 "data_size": 65536 00:13:01.722 }, 00:13:01.722 { 00:13:01.722 "name": "BaseBdev4", 00:13:01.722 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:01.722 "is_configured": true, 00:13:01.722 "data_offset": 0, 00:13:01.722 "data_size": 65536 00:13:01.722 } 00:13:01.722 ] 00:13:01.722 }' 00:13:01.722 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:01.722 17:31:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.980 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.980 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:02.239 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:02.239 17:31:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:02.497 [2024-07-15 17:31:58.223267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:02.497 BaseBdev1 00:13:02.497 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:02.497 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:02.497 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:02.497 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:02.498 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:02.498 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:02.498 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:02.758 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:03.016 [ 00:13:03.016 { 00:13:03.016 "name": "BaseBdev1", 00:13:03.016 "aliases": [ 00:13:03.016 "23280724-42d0-11ef-96ac-773515fba644" 00:13:03.016 ], 00:13:03.016 "product_name": "Malloc disk", 00:13:03.016 "block_size": 512, 00:13:03.016 "num_blocks": 65536, 00:13:03.016 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:03.016 "assigned_rate_limits": { 00:13:03.016 "rw_ios_per_sec": 0, 00:13:03.016 "rw_mbytes_per_sec": 0, 00:13:03.016 "r_mbytes_per_sec": 0, 00:13:03.016 "w_mbytes_per_sec": 0 00:13:03.016 }, 00:13:03.016 "claimed": true, 00:13:03.016 "claim_type": "exclusive_write", 00:13:03.016 "zoned": false, 00:13:03.016 "supported_io_types": { 00:13:03.016 "read": true, 00:13:03.016 
"write": true, 00:13:03.016 "unmap": true, 00:13:03.016 "flush": true, 00:13:03.016 "reset": true, 00:13:03.016 "nvme_admin": false, 00:13:03.016 "nvme_io": false, 00:13:03.016 "nvme_io_md": false, 00:13:03.016 "write_zeroes": true, 00:13:03.016 "zcopy": true, 00:13:03.016 "get_zone_info": false, 00:13:03.016 "zone_management": false, 00:13:03.016 "zone_append": false, 00:13:03.016 "compare": false, 00:13:03.016 "compare_and_write": false, 00:13:03.016 "abort": true, 00:13:03.016 "seek_hole": false, 00:13:03.016 "seek_data": false, 00:13:03.016 "copy": true, 00:13:03.016 "nvme_iov_md": false 00:13:03.016 }, 00:13:03.016 "memory_domains": [ 00:13:03.016 { 00:13:03.016 "dma_device_id": "system", 00:13:03.016 "dma_device_type": 1 00:13:03.016 }, 00:13:03.016 { 00:13:03.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.016 "dma_device_type": 2 00:13:03.016 } 00:13:03.016 ], 00:13:03.016 "driver_specific": {} 00:13:03.016 } 00:13:03.016 ] 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.016 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.017 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.017 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.275 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:03.275 "name": "Existed_Raid", 00:13:03.275 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.275 "strip_size_kb": 64, 00:13:03.275 "state": "configuring", 00:13:03.275 "raid_level": "raid0", 00:13:03.275 "superblock": false, 00:13:03.275 "num_base_bdevs": 4, 00:13:03.275 "num_base_bdevs_discovered": 3, 00:13:03.275 "num_base_bdevs_operational": 4, 00:13:03.275 "base_bdevs_list": [ 00:13:03.275 { 00:13:03.275 "name": "BaseBdev1", 00:13:03.275 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:03.275 "is_configured": true, 00:13:03.275 "data_offset": 0, 00:13:03.275 "data_size": 65536 00:13:03.275 }, 00:13:03.275 { 00:13:03.275 "name": null, 00:13:03.275 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:03.275 "is_configured": false, 00:13:03.275 "data_offset": 0, 00:13:03.275 "data_size": 65536 00:13:03.275 }, 00:13:03.275 { 00:13:03.275 "name": "BaseBdev3", 00:13:03.275 "uuid": 
"20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:03.275 "is_configured": true, 00:13:03.275 "data_offset": 0, 00:13:03.275 "data_size": 65536 00:13:03.275 }, 00:13:03.275 { 00:13:03.275 "name": "BaseBdev4", 00:13:03.275 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:03.275 "is_configured": true, 00:13:03.275 "data_offset": 0, 00:13:03.275 "data_size": 65536 00:13:03.275 } 00:13:03.275 ] 00:13:03.275 }' 00:13:03.275 17:31:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:03.275 17:31:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.533 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.533 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:03.793 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:03.793 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:04.052 [2024-07-15 17:31:59.763173] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.052 17:31:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.311 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.311 "name": "Existed_Raid", 00:13:04.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.311 "strip_size_kb": 64, 00:13:04.311 "state": "configuring", 00:13:04.311 "raid_level": "raid0", 00:13:04.311 "superblock": false, 00:13:04.311 "num_base_bdevs": 4, 00:13:04.311 "num_base_bdevs_discovered": 2, 00:13:04.311 "num_base_bdevs_operational": 4, 00:13:04.311 "base_bdevs_list": [ 00:13:04.311 { 00:13:04.311 "name": "BaseBdev1", 00:13:04.311 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:04.311 "is_configured": true, 00:13:04.311 "data_offset": 0, 00:13:04.311 "data_size": 65536 00:13:04.311 }, 00:13:04.311 { 
00:13:04.311 "name": null, 00:13:04.311 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:04.311 "is_configured": false, 00:13:04.311 "data_offset": 0, 00:13:04.311 "data_size": 65536 00:13:04.311 }, 00:13:04.311 { 00:13:04.311 "name": null, 00:13:04.311 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:04.311 "is_configured": false, 00:13:04.311 "data_offset": 0, 00:13:04.311 "data_size": 65536 00:13:04.311 }, 00:13:04.311 { 00:13:04.311 "name": "BaseBdev4", 00:13:04.311 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:04.311 "is_configured": true, 00:13:04.311 "data_offset": 0, 00:13:04.311 "data_size": 65536 00:13:04.311 } 00:13:04.311 ] 00:13:04.311 }' 00:13:04.311 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.311 17:32:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.569 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.569 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:04.827 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:04.827 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:05.085 [2024-07-15 17:32:00.883306] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:05.085 17:32:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.343 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:05.343 "name": "Existed_Raid", 00:13:05.343 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:05.343 "strip_size_kb": 64, 00:13:05.343 "state": "configuring", 00:13:05.343 "raid_level": "raid0", 00:13:05.343 "superblock": false, 00:13:05.343 "num_base_bdevs": 4, 00:13:05.343 "num_base_bdevs_discovered": 3, 00:13:05.343 
"num_base_bdevs_operational": 4, 00:13:05.343 "base_bdevs_list": [ 00:13:05.343 { 00:13:05.343 "name": "BaseBdev1", 00:13:05.343 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:05.343 "is_configured": true, 00:13:05.343 "data_offset": 0, 00:13:05.343 "data_size": 65536 00:13:05.343 }, 00:13:05.343 { 00:13:05.343 "name": null, 00:13:05.343 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:05.343 "is_configured": false, 00:13:05.343 "data_offset": 0, 00:13:05.343 "data_size": 65536 00:13:05.343 }, 00:13:05.343 { 00:13:05.343 "name": "BaseBdev3", 00:13:05.343 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:05.343 "is_configured": true, 00:13:05.343 "data_offset": 0, 00:13:05.343 "data_size": 65536 00:13:05.343 }, 00:13:05.343 { 00:13:05.343 "name": "BaseBdev4", 00:13:05.343 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:05.343 "is_configured": true, 00:13:05.343 "data_offset": 0, 00:13:05.343 "data_size": 65536 00:13:05.343 } 00:13:05.343 ] 00:13:05.343 }' 00:13:05.343 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:05.343 17:32:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.908 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.908 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:06.214 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:06.214 17:32:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:06.214 [2024-07-15 17:32:01.983359] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.214 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.471 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:06.471 "name": "Existed_Raid", 00:13:06.471 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:06.471 "strip_size_kb": 64, 00:13:06.471 "state": "configuring", 00:13:06.471 "raid_level": "raid0", 00:13:06.471 "superblock": false, 00:13:06.471 "num_base_bdevs": 4, 00:13:06.471 "num_base_bdevs_discovered": 2, 00:13:06.471 "num_base_bdevs_operational": 4, 00:13:06.471 "base_bdevs_list": [ 00:13:06.471 { 00:13:06.471 "name": null, 00:13:06.471 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:06.471 "is_configured": false, 00:13:06.471 "data_offset": 0, 00:13:06.471 "data_size": 65536 00:13:06.471 }, 00:13:06.471 { 00:13:06.471 "name": null, 00:13:06.471 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:06.471 "is_configured": false, 00:13:06.471 "data_offset": 0, 00:13:06.471 "data_size": 65536 00:13:06.471 }, 00:13:06.471 { 00:13:06.471 "name": "BaseBdev3", 00:13:06.471 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:06.471 "is_configured": true, 00:13:06.471 "data_offset": 0, 00:13:06.471 "data_size": 65536 00:13:06.471 }, 00:13:06.471 { 00:13:06.471 "name": "BaseBdev4", 00:13:06.471 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:06.471 "is_configured": true, 00:13:06.471 "data_offset": 0, 00:13:06.471 "data_size": 65536 00:13:06.471 } 00:13:06.471 ] 00:13:06.471 }' 00:13:06.471 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:06.471 17:32:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.727 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.727 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.984 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:06.984 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:07.241 [2024-07-15 17:32:02.978474] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:13:07.241 17:32:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.498 17:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.499 "name": "Existed_Raid", 00:13:07.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.499 "strip_size_kb": 64, 00:13:07.499 "state": "configuring", 00:13:07.499 "raid_level": "raid0", 00:13:07.499 "superblock": false, 00:13:07.499 "num_base_bdevs": 4, 00:13:07.499 "num_base_bdevs_discovered": 3, 00:13:07.499 "num_base_bdevs_operational": 4, 00:13:07.499 "base_bdevs_list": [ 00:13:07.499 { 00:13:07.499 "name": null, 00:13:07.499 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:07.499 "is_configured": false, 00:13:07.499 "data_offset": 0, 00:13:07.499 "data_size": 65536 00:13:07.499 }, 00:13:07.499 { 00:13:07.499 "name": "BaseBdev2", 00:13:07.499 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:07.499 "is_configured": true, 00:13:07.499 "data_offset": 0, 00:13:07.499 "data_size": 65536 00:13:07.499 }, 00:13:07.499 { 00:13:07.499 "name": "BaseBdev3", 00:13:07.499 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:07.499 "is_configured": true, 00:13:07.499 "data_offset": 0, 00:13:07.499 "data_size": 65536 00:13:07.499 }, 00:13:07.499 { 00:13:07.499 "name": "BaseBdev4", 00:13:07.499 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:07.499 "is_configured": true, 00:13:07.499 "data_offset": 0, 00:13:07.499 "data_size": 65536 00:13:07.499 } 00:13:07.499 ] 00:13:07.499 }' 00:13:07.499 17:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.499 17:32:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.756 17:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.756 17:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:08.015 17:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:08.015 17:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.015 17:32:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:08.273 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 23280724-42d0-11ef-96ac-773515fba644 00:13:08.531 [2024-07-15 17:32:04.314636] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:08.531 [2024-07-15 17:32:04.314664] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x301c4834f00 00:13:08.531 [2024-07-15 17:32:04.314669] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:08.531 [2024-07-15 17:32:04.314708] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x301c4897e20 00:13:08.531 [2024-07-15 17:32:04.314781] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x301c4834f00 00:13:08.531 [2024-07-15 17:32:04.314785] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x301c4834f00 00:13:08.531 [2024-07-15 
17:32:04.314819] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:08.531 NewBaseBdev 00:13:08.531 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:08.531 17:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:08.531 17:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:08.531 17:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:08.531 17:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:08.531 17:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:08.531 17:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:08.788 17:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:09.046 [ 00:13:09.046 { 00:13:09.046 "name": "NewBaseBdev", 00:13:09.046 "aliases": [ 00:13:09.046 "23280724-42d0-11ef-96ac-773515fba644" 00:13:09.047 ], 00:13:09.047 "product_name": "Malloc disk", 00:13:09.047 "block_size": 512, 00:13:09.047 "num_blocks": 65536, 00:13:09.047 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:09.047 "assigned_rate_limits": { 00:13:09.047 "rw_ios_per_sec": 0, 00:13:09.047 "rw_mbytes_per_sec": 0, 00:13:09.047 "r_mbytes_per_sec": 0, 00:13:09.047 "w_mbytes_per_sec": 0 00:13:09.047 }, 00:13:09.047 "claimed": true, 00:13:09.047 "claim_type": "exclusive_write", 00:13:09.047 "zoned": false, 00:13:09.047 "supported_io_types": { 00:13:09.047 "read": true, 00:13:09.047 "write": true, 00:13:09.047 "unmap": true, 00:13:09.047 "flush": true, 00:13:09.047 "reset": true, 00:13:09.047 "nvme_admin": false, 00:13:09.047 "nvme_io": false, 00:13:09.047 "nvme_io_md": false, 00:13:09.047 "write_zeroes": true, 00:13:09.047 "zcopy": true, 00:13:09.047 "get_zone_info": false, 00:13:09.047 "zone_management": false, 00:13:09.047 "zone_append": false, 00:13:09.047 "compare": false, 00:13:09.047 "compare_and_write": false, 00:13:09.047 "abort": true, 00:13:09.047 "seek_hole": false, 00:13:09.047 "seek_data": false, 00:13:09.047 "copy": true, 00:13:09.047 "nvme_iov_md": false 00:13:09.047 }, 00:13:09.047 "memory_domains": [ 00:13:09.047 { 00:13:09.047 "dma_device_id": "system", 00:13:09.047 "dma_device_type": 1 00:13:09.047 }, 00:13:09.047 { 00:13:09.047 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.047 "dma_device_type": 2 00:13:09.047 } 00:13:09.047 ], 00:13:09.047 "driver_specific": {} 00:13:09.047 } 00:13:09.047 ] 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:09.047 
17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.047 17:32:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.304 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.304 "name": "Existed_Raid", 00:13:09.304 "uuid": "26c986a7-42d0-11ef-96ac-773515fba644", 00:13:09.304 "strip_size_kb": 64, 00:13:09.304 "state": "online", 00:13:09.304 "raid_level": "raid0", 00:13:09.304 "superblock": false, 00:13:09.304 "num_base_bdevs": 4, 00:13:09.304 "num_base_bdevs_discovered": 4, 00:13:09.304 "num_base_bdevs_operational": 4, 00:13:09.304 "base_bdevs_list": [ 00:13:09.304 { 00:13:09.304 "name": "NewBaseBdev", 00:13:09.304 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:09.304 "is_configured": true, 00:13:09.304 "data_offset": 0, 00:13:09.304 "data_size": 65536 00:13:09.304 }, 00:13:09.304 { 00:13:09.304 "name": "BaseBdev2", 00:13:09.304 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:09.304 "is_configured": true, 00:13:09.304 "data_offset": 0, 00:13:09.304 "data_size": 65536 00:13:09.304 }, 00:13:09.304 { 00:13:09.304 "name": "BaseBdev3", 00:13:09.304 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:09.304 "is_configured": true, 00:13:09.304 "data_offset": 0, 00:13:09.304 "data_size": 65536 00:13:09.304 }, 00:13:09.304 { 00:13:09.304 "name": "BaseBdev4", 00:13:09.304 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:09.304 "is_configured": true, 00:13:09.304 "data_offset": 0, 00:13:09.304 "data_size": 65536 00:13:09.304 } 00:13:09.304 ] 00:13:09.304 }' 00:13:09.304 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.304 17:32:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.562 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:09.562 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:09.562 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:09.562 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:09.562 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:09.562 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:09.562 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:09.562 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:09.820 [2024-07-15 17:32:05.554562] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:09.820 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:09.820 "name": "Existed_Raid", 00:13:09.820 "aliases": [ 00:13:09.820 "26c986a7-42d0-11ef-96ac-773515fba644" 00:13:09.820 ], 00:13:09.820 "product_name": "Raid Volume", 00:13:09.820 "block_size": 512, 00:13:09.820 "num_blocks": 262144, 00:13:09.820 "uuid": "26c986a7-42d0-11ef-96ac-773515fba644", 00:13:09.820 "assigned_rate_limits": { 00:13:09.820 "rw_ios_per_sec": 0, 00:13:09.820 "rw_mbytes_per_sec": 0, 00:13:09.820 "r_mbytes_per_sec": 0, 00:13:09.820 "w_mbytes_per_sec": 0 00:13:09.820 }, 00:13:09.820 "claimed": false, 00:13:09.820 "zoned": false, 00:13:09.820 "supported_io_types": { 00:13:09.820 "read": true, 00:13:09.820 "write": true, 00:13:09.820 "unmap": true, 00:13:09.820 "flush": true, 00:13:09.820 "reset": true, 00:13:09.820 "nvme_admin": false, 00:13:09.820 "nvme_io": false, 00:13:09.820 "nvme_io_md": false, 00:13:09.820 "write_zeroes": true, 00:13:09.820 "zcopy": false, 00:13:09.820 "get_zone_info": false, 00:13:09.820 "zone_management": false, 00:13:09.820 "zone_append": false, 00:13:09.820 "compare": false, 00:13:09.820 "compare_and_write": false, 00:13:09.820 "abort": false, 00:13:09.820 "seek_hole": false, 00:13:09.820 "seek_data": false, 00:13:09.820 "copy": false, 00:13:09.820 "nvme_iov_md": false 00:13:09.820 }, 00:13:09.820 "memory_domains": [ 00:13:09.820 { 00:13:09.820 "dma_device_id": "system", 00:13:09.820 "dma_device_type": 1 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.820 "dma_device_type": 2 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "dma_device_id": "system", 00:13:09.820 "dma_device_type": 1 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.820 "dma_device_type": 2 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "dma_device_id": "system", 00:13:09.820 "dma_device_type": 1 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.820 "dma_device_type": 2 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "dma_device_id": "system", 00:13:09.820 "dma_device_type": 1 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.820 "dma_device_type": 2 00:13:09.820 } 00:13:09.820 ], 00:13:09.820 "driver_specific": { 00:13:09.820 "raid": { 00:13:09.820 "uuid": "26c986a7-42d0-11ef-96ac-773515fba644", 00:13:09.820 "strip_size_kb": 64, 00:13:09.820 "state": "online", 00:13:09.820 "raid_level": "raid0", 00:13:09.820 "superblock": false, 00:13:09.820 "num_base_bdevs": 4, 00:13:09.820 "num_base_bdevs_discovered": 4, 00:13:09.820 "num_base_bdevs_operational": 4, 00:13:09.820 "base_bdevs_list": [ 00:13:09.820 { 00:13:09.820 "name": "NewBaseBdev", 00:13:09.820 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:09.820 "is_configured": true, 00:13:09.820 "data_offset": 0, 00:13:09.820 "data_size": 65536 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "name": "BaseBdev2", 00:13:09.820 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:09.820 "is_configured": true, 00:13:09.820 "data_offset": 0, 00:13:09.820 "data_size": 65536 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "name": "BaseBdev3", 00:13:09.820 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:09.820 "is_configured": true, 00:13:09.820 "data_offset": 0, 00:13:09.820 "data_size": 65536 00:13:09.820 }, 00:13:09.820 { 00:13:09.820 "name": "BaseBdev4", 00:13:09.820 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:09.820 
"is_configured": true, 00:13:09.820 "data_offset": 0, 00:13:09.820 "data_size": 65536 00:13:09.820 } 00:13:09.820 ] 00:13:09.820 } 00:13:09.820 } 00:13:09.820 }' 00:13:09.821 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:09.821 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:09.821 BaseBdev2 00:13:09.821 BaseBdev3 00:13:09.821 BaseBdev4' 00:13:09.821 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:09.821 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:09.821 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:10.078 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:10.078 "name": "NewBaseBdev", 00:13:10.078 "aliases": [ 00:13:10.078 "23280724-42d0-11ef-96ac-773515fba644" 00:13:10.078 ], 00:13:10.078 "product_name": "Malloc disk", 00:13:10.078 "block_size": 512, 00:13:10.078 "num_blocks": 65536, 00:13:10.078 "uuid": "23280724-42d0-11ef-96ac-773515fba644", 00:13:10.078 "assigned_rate_limits": { 00:13:10.078 "rw_ios_per_sec": 0, 00:13:10.078 "rw_mbytes_per_sec": 0, 00:13:10.078 "r_mbytes_per_sec": 0, 00:13:10.078 "w_mbytes_per_sec": 0 00:13:10.078 }, 00:13:10.078 "claimed": true, 00:13:10.078 "claim_type": "exclusive_write", 00:13:10.078 "zoned": false, 00:13:10.078 "supported_io_types": { 00:13:10.078 "read": true, 00:13:10.078 "write": true, 00:13:10.078 "unmap": true, 00:13:10.078 "flush": true, 00:13:10.078 "reset": true, 00:13:10.078 "nvme_admin": false, 00:13:10.078 "nvme_io": false, 00:13:10.078 "nvme_io_md": false, 00:13:10.078 "write_zeroes": true, 00:13:10.079 "zcopy": true, 00:13:10.079 "get_zone_info": false, 00:13:10.079 "zone_management": false, 00:13:10.079 "zone_append": false, 00:13:10.079 "compare": false, 00:13:10.079 "compare_and_write": false, 00:13:10.079 "abort": true, 00:13:10.079 "seek_hole": false, 00:13:10.079 "seek_data": false, 00:13:10.079 "copy": true, 00:13:10.079 "nvme_iov_md": false 00:13:10.079 }, 00:13:10.079 "memory_domains": [ 00:13:10.079 { 00:13:10.079 "dma_device_id": "system", 00:13:10.079 "dma_device_type": 1 00:13:10.079 }, 00:13:10.079 { 00:13:10.079 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.079 "dma_device_type": 2 00:13:10.079 } 00:13:10.079 ], 00:13:10.079 "driver_specific": {} 00:13:10.079 }' 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:10.079 17:32:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:10.336 "name": "BaseBdev2", 00:13:10.336 "aliases": [ 00:13:10.336 "20729eef-42d0-11ef-96ac-773515fba644" 00:13:10.336 ], 00:13:10.336 "product_name": "Malloc disk", 00:13:10.336 "block_size": 512, 00:13:10.336 "num_blocks": 65536, 00:13:10.336 "uuid": "20729eef-42d0-11ef-96ac-773515fba644", 00:13:10.336 "assigned_rate_limits": { 00:13:10.336 "rw_ios_per_sec": 0, 00:13:10.336 "rw_mbytes_per_sec": 0, 00:13:10.336 "r_mbytes_per_sec": 0, 00:13:10.336 "w_mbytes_per_sec": 0 00:13:10.336 }, 00:13:10.336 "claimed": true, 00:13:10.336 "claim_type": "exclusive_write", 00:13:10.336 "zoned": false, 00:13:10.336 "supported_io_types": { 00:13:10.336 "read": true, 00:13:10.336 "write": true, 00:13:10.336 "unmap": true, 00:13:10.336 "flush": true, 00:13:10.336 "reset": true, 00:13:10.336 "nvme_admin": false, 00:13:10.336 "nvme_io": false, 00:13:10.336 "nvme_io_md": false, 00:13:10.336 "write_zeroes": true, 00:13:10.336 "zcopy": true, 00:13:10.336 "get_zone_info": false, 00:13:10.336 "zone_management": false, 00:13:10.336 "zone_append": false, 00:13:10.336 "compare": false, 00:13:10.336 "compare_and_write": false, 00:13:10.336 "abort": true, 00:13:10.336 "seek_hole": false, 00:13:10.336 "seek_data": false, 00:13:10.336 "copy": true, 00:13:10.336 "nvme_iov_md": false 00:13:10.336 }, 00:13:10.336 "memory_domains": [ 00:13:10.336 { 00:13:10.336 "dma_device_id": "system", 00:13:10.336 "dma_device_type": 1 00:13:10.336 }, 00:13:10.336 { 00:13:10.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.336 "dma_device_type": 2 00:13:10.336 } 00:13:10.336 ], 00:13:10.336 "driver_specific": {} 00:13:10.336 }' 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.336 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:10.594 "name": "BaseBdev3", 00:13:10.594 "aliases": [ 00:13:10.594 "20ecb2a6-42d0-11ef-96ac-773515fba644" 00:13:10.594 ], 00:13:10.594 "product_name": "Malloc disk", 00:13:10.594 "block_size": 512, 00:13:10.594 "num_blocks": 65536, 00:13:10.594 "uuid": "20ecb2a6-42d0-11ef-96ac-773515fba644", 00:13:10.594 "assigned_rate_limits": { 00:13:10.594 "rw_ios_per_sec": 0, 00:13:10.594 "rw_mbytes_per_sec": 0, 00:13:10.594 "r_mbytes_per_sec": 0, 00:13:10.594 "w_mbytes_per_sec": 0 00:13:10.594 }, 00:13:10.594 "claimed": true, 00:13:10.594 "claim_type": "exclusive_write", 00:13:10.594 "zoned": false, 00:13:10.594 "supported_io_types": { 00:13:10.594 "read": true, 00:13:10.594 "write": true, 00:13:10.594 "unmap": true, 00:13:10.594 "flush": true, 00:13:10.594 "reset": true, 00:13:10.594 "nvme_admin": false, 00:13:10.594 "nvme_io": false, 00:13:10.594 "nvme_io_md": false, 00:13:10.594 "write_zeroes": true, 00:13:10.594 "zcopy": true, 00:13:10.594 "get_zone_info": false, 00:13:10.594 "zone_management": false, 00:13:10.594 "zone_append": false, 00:13:10.594 "compare": false, 00:13:10.594 "compare_and_write": false, 00:13:10.594 "abort": true, 00:13:10.594 "seek_hole": false, 00:13:10.594 "seek_data": false, 00:13:10.594 "copy": true, 00:13:10.594 "nvme_iov_md": false 00:13:10.594 }, 00:13:10.594 "memory_domains": [ 00:13:10.594 { 00:13:10.594 "dma_device_id": "system", 00:13:10.594 "dma_device_type": 1 00:13:10.594 }, 00:13:10.594 { 00:13:10.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.594 "dma_device_type": 2 00:13:10.594 } 00:13:10.594 ], 00:13:10.594 "driver_specific": {} 00:13:10.594 }' 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:10.594 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.851 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:10.851 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:10.851 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.851 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:10.851 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:10.851 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.851 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:10.852 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:10.852 17:32:06 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:10.852 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:10.852 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:11.109 "name": "BaseBdev4", 00:13:11.109 "aliases": [ 00:13:11.109 "216fedbe-42d0-11ef-96ac-773515fba644" 00:13:11.109 ], 00:13:11.109 "product_name": "Malloc disk", 00:13:11.109 "block_size": 512, 00:13:11.109 "num_blocks": 65536, 00:13:11.109 "uuid": "216fedbe-42d0-11ef-96ac-773515fba644", 00:13:11.109 "assigned_rate_limits": { 00:13:11.109 "rw_ios_per_sec": 0, 00:13:11.109 "rw_mbytes_per_sec": 0, 00:13:11.109 "r_mbytes_per_sec": 0, 00:13:11.109 "w_mbytes_per_sec": 0 00:13:11.109 }, 00:13:11.109 "claimed": true, 00:13:11.109 "claim_type": "exclusive_write", 00:13:11.109 "zoned": false, 00:13:11.109 "supported_io_types": { 00:13:11.109 "read": true, 00:13:11.109 "write": true, 00:13:11.109 "unmap": true, 00:13:11.109 "flush": true, 00:13:11.109 "reset": true, 00:13:11.109 "nvme_admin": false, 00:13:11.109 "nvme_io": false, 00:13:11.109 "nvme_io_md": false, 00:13:11.109 "write_zeroes": true, 00:13:11.109 "zcopy": true, 00:13:11.109 "get_zone_info": false, 00:13:11.109 "zone_management": false, 00:13:11.109 "zone_append": false, 00:13:11.109 "compare": false, 00:13:11.109 "compare_and_write": false, 00:13:11.109 "abort": true, 00:13:11.109 "seek_hole": false, 00:13:11.109 "seek_data": false, 00:13:11.109 "copy": true, 00:13:11.109 "nvme_iov_md": false 00:13:11.109 }, 00:13:11.109 "memory_domains": [ 00:13:11.109 { 00:13:11.109 "dma_device_id": "system", 00:13:11.109 "dma_device_type": 1 00:13:11.109 }, 00:13:11.109 { 00:13:11.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.109 "dma_device_type": 2 00:13:11.109 } 00:13:11.109 ], 00:13:11.109 "driver_specific": {} 00:13:11.109 }' 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:11.109 17:32:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:11.366 [2024-07-15 17:32:07.026605] 
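For readers following the xtrace: the bdev_raid.sh@203-208 entries repeated above are a single loop that inspects every base bdev of the array. A minimal stand-alone sketch of that check, built only from the rpc.py socket and jq filters visible in this trace (the bdev names are the ones this test creates):
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<<"$info") == 512  ]]   # 512-byte blocks
      [[ $(jq .md_size       <<<"$info") == null ]]   # no separate metadata
      [[ $(jq .md_interleave <<<"$info") == null ]]
      [[ $(jq .dif_type      <<<"$info") == null ]]   # DIF not enabled
  done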
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:11.366 [2024-07-15 17:32:07.026626] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:11.366 [2024-07-15 17:32:07.026666] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:11.366 [2024-07-15 17:32:07.026681] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:11.366 [2024-07-15 17:32:07.026685] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x301c4834f00 name Existed_Raid, state offline 00:13:11.366 17:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 58367 00:13:11.366 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 58367 ']' 00:13:11.366 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 58367 00:13:11.366 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:11.367 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:11.367 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 58367 00:13:11.367 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:13:11.367 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:11.367 killing process with pid 58367 00:13:11.367 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:11.367 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58367' 00:13:11.367 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 58367 00:13:11.367 [2024-07-15 17:32:07.053327] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:11.367 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 58367 00:13:11.367 [2024-07-15 17:32:07.076765] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:11.624 ************************************ 00:13:11.624 END TEST raid_state_function_test 00:13:11.624 ************************************ 00:13:11.624 00:13:11.624 real 0m27.422s 00:13:11.624 user 0m50.272s 00:13:11.624 sys 0m3.726s 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.624 17:32:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:11.624 17:32:07 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:11.624 17:32:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:11.624 17:32:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.624 17:32:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:11.624 ************************************ 00:13:11.624 START TEST raid_state_function_test_sb 00:13:11.624 ************************************ 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 00:13:11.624 
17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=59186 00:13:11.624 Process raid pid: 59186 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59186' 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- 
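The *_sb variant starting here drives the same raid_state_function_test flow with superblock=true, which above turns superblock_create_arg into -s: every bdev_raid_create call gets an extra -s so array metadata is written to the base bdevs (that superblock is also what produces the data_offset 2048 seen in the dumps further down). Roughly, as a sketch using only arguments that appear in this log:
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # without superblock (plain raid_state_function_test):
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # with superblock (raid_state_function_test_sb adds -s):
  $rpc bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid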
bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 59186 /var/tmp/spdk-raid.sock 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 59186 ']' 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:11.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:11.624 17:32:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:11.624 [2024-07-15 17:32:07.318527] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:13:11.624 [2024-07-15 17:32:07.318741] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:12.189 EAL: TSC is not safe to use in SMP mode 00:13:12.189 EAL: TSC is not invariant 00:13:12.189 [2024-07-15 17:32:07.856194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.189 [2024-07-15 17:32:07.941348] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:12.189 [2024-07-15 17:32:07.943486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.189 [2024-07-15 17:32:07.944263] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.189 [2024-07-15 17:32:07.944269] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:12.756 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:12.756 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:12.756 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:13.036 [2024-07-15 17:32:08.636920] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:13.036 [2024-07-15 17:32:08.637026] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:13.036 [2024-07-15 17:32:08.637032] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:13.036 [2024-07-15 17:32:08.637040] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:13.036 [2024-07-15 17:32:08.637044] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:13.036 [2024-07-15 17:32:08.637051] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:13.036 [2024-07-15 17:32:08.637055] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:13.036 [2024-07-15 17:32:08.637062] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.036 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.293 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:13.293 "name": "Existed_Raid", 00:13:13.293 "uuid": "295d0b6c-42d0-11ef-96ac-773515fba644", 00:13:13.293 "strip_size_kb": 64, 00:13:13.293 "state": "configuring", 00:13:13.293 "raid_level": "raid0", 00:13:13.293 "superblock": true, 00:13:13.293 "num_base_bdevs": 4, 00:13:13.293 "num_base_bdevs_discovered": 0, 00:13:13.293 "num_base_bdevs_operational": 4, 00:13:13.293 "base_bdevs_list": [ 00:13:13.293 { 00:13:13.293 "name": "BaseBdev1", 00:13:13.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.293 "is_configured": false, 00:13:13.293 "data_offset": 0, 00:13:13.293 "data_size": 0 00:13:13.293 }, 00:13:13.293 { 00:13:13.293 "name": "BaseBdev2", 00:13:13.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.293 "is_configured": false, 00:13:13.293 "data_offset": 0, 00:13:13.293 "data_size": 0 00:13:13.293 }, 00:13:13.293 { 00:13:13.293 "name": "BaseBdev3", 00:13:13.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.293 "is_configured": false, 00:13:13.293 "data_offset": 0, 00:13:13.293 "data_size": 0 00:13:13.293 }, 00:13:13.293 { 00:13:13.293 "name": "BaseBdev4", 00:13:13.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.293 "is_configured": false, 00:13:13.293 "data_offset": 0, 00:13:13.293 "data_size": 0 00:13:13.293 } 00:13:13.293 ] 00:13:13.293 }' 00:13:13.293 17:32:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:13.293 17:32:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:13.551 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:13.809 [2024-07-15 17:32:09.552949] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.809 
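verify_raid_bdev_state, whose expansion is traced above, amounts to pulling the raid bdev's entry out of bdev_raid_get_bdevs and comparing a handful of fields against the expected values (here configuring/raid0/64/4 with 0 of 4 base bdevs discovered). A condensed sketch in the same style; the actual helper in bdev_raid.sh may check more than this:
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  tmp=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state                      <<<"$tmp") == configuring ]]
  [[ $(jq -r .raid_level                 <<<"$tmp") == raid0 ]]
  [[ $(jq -r .strip_size_kb              <<<"$tmp") == 64 ]]
  [[ $(jq -r .num_base_bdevs_operational <<<"$tmp") == 4 ]]
  [[ $(jq -r .num_base_bdevs_discovered  <<<"$tmp") == 0 ]]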
[2024-07-15 17:32:09.552973] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17df59834500 name Existed_Raid, state configuring 00:13:13.809 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:14.068 [2024-07-15 17:32:09.829036] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:14.068 [2024-07-15 17:32:09.829094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:14.068 [2024-07-15 17:32:09.829099] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:14.068 [2024-07-15 17:32:09.829124] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:14.068 [2024-07-15 17:32:09.829127] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:14.068 [2024-07-15 17:32:09.829134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:14.068 [2024-07-15 17:32:09.829137] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:14.068 [2024-07-15 17:32:09.829144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:14.068 17:32:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:14.327 [2024-07-15 17:32:10.070025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:14.327 BaseBdev1 00:13:14.327 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:14.327 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:14.327 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:14.327 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:14.327 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:14.327 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:14.327 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:14.585 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:14.843 [ 00:13:14.843 { 00:13:14.843 "name": "BaseBdev1", 00:13:14.843 "aliases": [ 00:13:14.843 "2a3791a3-42d0-11ef-96ac-773515fba644" 00:13:14.843 ], 00:13:14.843 "product_name": "Malloc disk", 00:13:14.843 "block_size": 512, 00:13:14.843 "num_blocks": 65536, 00:13:14.843 "uuid": "2a3791a3-42d0-11ef-96ac-773515fba644", 00:13:14.843 "assigned_rate_limits": { 00:13:14.843 "rw_ios_per_sec": 0, 00:13:14.843 "rw_mbytes_per_sec": 0, 00:13:14.843 "r_mbytes_per_sec": 0, 00:13:14.843 "w_mbytes_per_sec": 0 00:13:14.843 }, 00:13:14.843 "claimed": true, 00:13:14.843 "claim_type": "exclusive_write", 00:13:14.843 "zoned": false, 00:13:14.843 "supported_io_types": { 
00:13:14.843 "read": true, 00:13:14.843 "write": true, 00:13:14.843 "unmap": true, 00:13:14.843 "flush": true, 00:13:14.843 "reset": true, 00:13:14.843 "nvme_admin": false, 00:13:14.843 "nvme_io": false, 00:13:14.844 "nvme_io_md": false, 00:13:14.844 "write_zeroes": true, 00:13:14.844 "zcopy": true, 00:13:14.844 "get_zone_info": false, 00:13:14.844 "zone_management": false, 00:13:14.844 "zone_append": false, 00:13:14.844 "compare": false, 00:13:14.844 "compare_and_write": false, 00:13:14.844 "abort": true, 00:13:14.844 "seek_hole": false, 00:13:14.844 "seek_data": false, 00:13:14.844 "copy": true, 00:13:14.844 "nvme_iov_md": false 00:13:14.844 }, 00:13:14.844 "memory_domains": [ 00:13:14.844 { 00:13:14.844 "dma_device_id": "system", 00:13:14.844 "dma_device_type": 1 00:13:14.844 }, 00:13:14.844 { 00:13:14.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:14.844 "dma_device_type": 2 00:13:14.844 } 00:13:14.844 ], 00:13:14.844 "driver_specific": {} 00:13:14.844 } 00:13:14.844 ] 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.844 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.102 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.102 "name": "Existed_Raid", 00:13:15.102 "uuid": "2a12f285-42d0-11ef-96ac-773515fba644", 00:13:15.102 "strip_size_kb": 64, 00:13:15.102 "state": "configuring", 00:13:15.102 "raid_level": "raid0", 00:13:15.102 "superblock": true, 00:13:15.102 "num_base_bdevs": 4, 00:13:15.102 "num_base_bdevs_discovered": 1, 00:13:15.102 "num_base_bdevs_operational": 4, 00:13:15.102 "base_bdevs_list": [ 00:13:15.102 { 00:13:15.102 "name": "BaseBdev1", 00:13:15.102 "uuid": "2a3791a3-42d0-11ef-96ac-773515fba644", 00:13:15.102 "is_configured": true, 00:13:15.102 "data_offset": 2048, 00:13:15.102 "data_size": 63488 00:13:15.102 }, 00:13:15.102 { 00:13:15.102 "name": "BaseBdev2", 00:13:15.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.102 "is_configured": false, 00:13:15.102 "data_offset": 0, 00:13:15.102 "data_size": 0 
00:13:15.102 }, 00:13:15.102 { 00:13:15.102 "name": "BaseBdev3", 00:13:15.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.102 "is_configured": false, 00:13:15.102 "data_offset": 0, 00:13:15.102 "data_size": 0 00:13:15.102 }, 00:13:15.102 { 00:13:15.102 "name": "BaseBdev4", 00:13:15.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.102 "is_configured": false, 00:13:15.102 "data_offset": 0, 00:13:15.102 "data_size": 0 00:13:15.102 } 00:13:15.102 ] 00:13:15.102 }' 00:13:15.102 17:32:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.102 17:32:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:15.360 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:15.618 [2024-07-15 17:32:11.373144] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:15.618 [2024-07-15 17:32:11.373175] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17df59834500 name Existed_Raid, state configuring 00:13:15.618 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:15.876 [2024-07-15 17:32:11.665171] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.876 [2024-07-15 17:32:11.666017] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.876 [2024-07-15 17:32:11.666058] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.876 [2024-07-15 17:32:11.666063] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.876 [2024-07-15 17:32:11.666083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.876 [2024-07-15 17:32:11.666087] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:15.876 [2024-07-15 17:32:11.666094] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.876 17:32:11 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.876 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:16.443 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:16.443 "name": "Existed_Raid", 00:13:16.443 "uuid": "2b2b1e54-42d0-11ef-96ac-773515fba644", 00:13:16.443 "strip_size_kb": 64, 00:13:16.443 "state": "configuring", 00:13:16.443 "raid_level": "raid0", 00:13:16.443 "superblock": true, 00:13:16.443 "num_base_bdevs": 4, 00:13:16.443 "num_base_bdevs_discovered": 1, 00:13:16.443 "num_base_bdevs_operational": 4, 00:13:16.443 "base_bdevs_list": [ 00:13:16.443 { 00:13:16.443 "name": "BaseBdev1", 00:13:16.443 "uuid": "2a3791a3-42d0-11ef-96ac-773515fba644", 00:13:16.443 "is_configured": true, 00:13:16.443 "data_offset": 2048, 00:13:16.443 "data_size": 63488 00:13:16.443 }, 00:13:16.443 { 00:13:16.443 "name": "BaseBdev2", 00:13:16.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.443 "is_configured": false, 00:13:16.443 "data_offset": 0, 00:13:16.443 "data_size": 0 00:13:16.443 }, 00:13:16.443 { 00:13:16.443 "name": "BaseBdev3", 00:13:16.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.443 "is_configured": false, 00:13:16.443 "data_offset": 0, 00:13:16.443 "data_size": 0 00:13:16.443 }, 00:13:16.443 { 00:13:16.443 "name": "BaseBdev4", 00:13:16.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:16.443 "is_configured": false, 00:13:16.443 "data_offset": 0, 00:13:16.443 "data_size": 0 00:13:16.443 } 00:13:16.443 ] 00:13:16.443 }' 00:13:16.443 17:32:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:16.443 17:32:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.702 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:16.702 [2024-07-15 17:32:12.529399] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:16.960 BaseBdev2 00:13:16.960 17:32:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:16.960 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:16.960 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:16.960 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:16.960 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:16.960 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:16.960 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:16.960 17:32:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:17.218 [ 00:13:17.218 { 00:13:17.218 "name": "BaseBdev2", 00:13:17.218 "aliases": [ 00:13:17.218 "2baef803-42d0-11ef-96ac-773515fba644" 00:13:17.218 ], 00:13:17.218 "product_name": "Malloc disk", 00:13:17.218 "block_size": 512, 00:13:17.218 "num_blocks": 65536, 00:13:17.218 "uuid": "2baef803-42d0-11ef-96ac-773515fba644", 00:13:17.218 "assigned_rate_limits": { 00:13:17.218 "rw_ios_per_sec": 0, 00:13:17.218 "rw_mbytes_per_sec": 0, 00:13:17.218 "r_mbytes_per_sec": 0, 00:13:17.218 "w_mbytes_per_sec": 0 00:13:17.218 }, 00:13:17.218 "claimed": true, 00:13:17.218 "claim_type": "exclusive_write", 00:13:17.218 "zoned": false, 00:13:17.218 "supported_io_types": { 00:13:17.218 "read": true, 00:13:17.218 "write": true, 00:13:17.218 "unmap": true, 00:13:17.218 "flush": true, 00:13:17.218 "reset": true, 00:13:17.218 "nvme_admin": false, 00:13:17.218 "nvme_io": false, 00:13:17.218 "nvme_io_md": false, 00:13:17.218 "write_zeroes": true, 00:13:17.218 "zcopy": true, 00:13:17.218 "get_zone_info": false, 00:13:17.218 "zone_management": false, 00:13:17.218 "zone_append": false, 00:13:17.218 "compare": false, 00:13:17.218 "compare_and_write": false, 00:13:17.218 "abort": true, 00:13:17.218 "seek_hole": false, 00:13:17.218 "seek_data": false, 00:13:17.218 "copy": true, 00:13:17.218 "nvme_iov_md": false 00:13:17.218 }, 00:13:17.218 "memory_domains": [ 00:13:17.218 { 00:13:17.218 "dma_device_id": "system", 00:13:17.218 "dma_device_type": 1 00:13:17.218 }, 00:13:17.218 { 00:13:17.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.218 "dma_device_type": 2 00:13:17.218 } 00:13:17.218 ], 00:13:17.218 "driver_specific": {} 00:13:17.218 } 00:13:17.218 ] 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.218 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.476 17:32:13 
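The pattern repeated for BaseBdev1 through BaseBdev4 in this stretch is: create a 32 MB malloc bdev with 512-byte blocks (the 65536 num_blocks in the dumps), wait for it to be examined and appear, then confirm the configuring raid bdev has claimed it, so num_base_bdevs_discovered grows by one each time. Condensed, using only calls that appear in the trace, with BaseBdev3 as the illustrative example:
  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_create 32 512 -b BaseBdev3           # 32 MB / 512 B = 65536 blocks
  $rpc bdev_wait_for_examine
  $rpc bdev_get_bdevs -b BaseBdev3 -t 2000 >/dev/null    # waitforbdev equivalent
  $rpc bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .num_base_bdevs_discovered'   # expect 3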
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.476 "name": "Existed_Raid", 00:13:17.476 "uuid": "2b2b1e54-42d0-11ef-96ac-773515fba644", 00:13:17.476 "strip_size_kb": 64, 00:13:17.476 "state": "configuring", 00:13:17.476 "raid_level": "raid0", 00:13:17.476 "superblock": true, 00:13:17.476 "num_base_bdevs": 4, 00:13:17.476 "num_base_bdevs_discovered": 2, 00:13:17.476 "num_base_bdevs_operational": 4, 00:13:17.476 "base_bdevs_list": [ 00:13:17.476 { 00:13:17.476 "name": "BaseBdev1", 00:13:17.476 "uuid": "2a3791a3-42d0-11ef-96ac-773515fba644", 00:13:17.476 "is_configured": true, 00:13:17.476 "data_offset": 2048, 00:13:17.476 "data_size": 63488 00:13:17.476 }, 00:13:17.476 { 00:13:17.476 "name": "BaseBdev2", 00:13:17.476 "uuid": "2baef803-42d0-11ef-96ac-773515fba644", 00:13:17.476 "is_configured": true, 00:13:17.476 "data_offset": 2048, 00:13:17.476 "data_size": 63488 00:13:17.476 }, 00:13:17.476 { 00:13:17.476 "name": "BaseBdev3", 00:13:17.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.476 "is_configured": false, 00:13:17.476 "data_offset": 0, 00:13:17.476 "data_size": 0 00:13:17.476 }, 00:13:17.476 { 00:13:17.476 "name": "BaseBdev4", 00:13:17.476 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.476 "is_configured": false, 00:13:17.476 "data_offset": 0, 00:13:17.476 "data_size": 0 00:13:17.476 } 00:13:17.476 ] 00:13:17.476 }' 00:13:17.476 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.476 17:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.040 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:18.041 [2024-07-15 17:32:13.837433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:18.041 BaseBdev3 00:13:18.041 17:32:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:18.041 17:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:18.041 17:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:18.041 17:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:18.041 17:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:18.041 17:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:18.041 17:32:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:18.298 17:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:18.557 [ 00:13:18.557 { 00:13:18.557 "name": "BaseBdev3", 00:13:18.557 "aliases": [ 00:13:18.557 "2c769004-42d0-11ef-96ac-773515fba644" 00:13:18.557 ], 00:13:18.557 "product_name": "Malloc disk", 00:13:18.557 "block_size": 512, 00:13:18.557 "num_blocks": 65536, 00:13:18.557 "uuid": "2c769004-42d0-11ef-96ac-773515fba644", 00:13:18.557 "assigned_rate_limits": { 00:13:18.557 "rw_ios_per_sec": 0, 00:13:18.557 "rw_mbytes_per_sec": 0, 00:13:18.557 "r_mbytes_per_sec": 0, 00:13:18.557 
"w_mbytes_per_sec": 0 00:13:18.557 }, 00:13:18.557 "claimed": true, 00:13:18.557 "claim_type": "exclusive_write", 00:13:18.557 "zoned": false, 00:13:18.557 "supported_io_types": { 00:13:18.557 "read": true, 00:13:18.557 "write": true, 00:13:18.557 "unmap": true, 00:13:18.557 "flush": true, 00:13:18.557 "reset": true, 00:13:18.557 "nvme_admin": false, 00:13:18.557 "nvme_io": false, 00:13:18.557 "nvme_io_md": false, 00:13:18.557 "write_zeroes": true, 00:13:18.557 "zcopy": true, 00:13:18.557 "get_zone_info": false, 00:13:18.557 "zone_management": false, 00:13:18.557 "zone_append": false, 00:13:18.557 "compare": false, 00:13:18.557 "compare_and_write": false, 00:13:18.557 "abort": true, 00:13:18.557 "seek_hole": false, 00:13:18.557 "seek_data": false, 00:13:18.557 "copy": true, 00:13:18.557 "nvme_iov_md": false 00:13:18.557 }, 00:13:18.557 "memory_domains": [ 00:13:18.557 { 00:13:18.557 "dma_device_id": "system", 00:13:18.557 "dma_device_type": 1 00:13:18.557 }, 00:13:18.557 { 00:13:18.557 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:18.557 "dma_device_type": 2 00:13:18.557 } 00:13:18.557 ], 00:13:18.557 "driver_specific": {} 00:13:18.557 } 00:13:18.557 ] 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.557 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.815 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.815 "name": "Existed_Raid", 00:13:18.815 "uuid": "2b2b1e54-42d0-11ef-96ac-773515fba644", 00:13:18.815 "strip_size_kb": 64, 00:13:18.815 "state": "configuring", 00:13:18.815 "raid_level": "raid0", 00:13:18.815 "superblock": true, 00:13:18.815 "num_base_bdevs": 4, 00:13:18.815 "num_base_bdevs_discovered": 3, 00:13:18.815 "num_base_bdevs_operational": 4, 00:13:18.815 "base_bdevs_list": [ 00:13:18.815 { 00:13:18.815 "name": 
"BaseBdev1", 00:13:18.815 "uuid": "2a3791a3-42d0-11ef-96ac-773515fba644", 00:13:18.815 "is_configured": true, 00:13:18.815 "data_offset": 2048, 00:13:18.815 "data_size": 63488 00:13:18.815 }, 00:13:18.815 { 00:13:18.815 "name": "BaseBdev2", 00:13:18.815 "uuid": "2baef803-42d0-11ef-96ac-773515fba644", 00:13:18.815 "is_configured": true, 00:13:18.815 "data_offset": 2048, 00:13:18.815 "data_size": 63488 00:13:18.815 }, 00:13:18.815 { 00:13:18.815 "name": "BaseBdev3", 00:13:18.815 "uuid": "2c769004-42d0-11ef-96ac-773515fba644", 00:13:18.815 "is_configured": true, 00:13:18.815 "data_offset": 2048, 00:13:18.815 "data_size": 63488 00:13:18.815 }, 00:13:18.815 { 00:13:18.815 "name": "BaseBdev4", 00:13:18.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.815 "is_configured": false, 00:13:18.815 "data_offset": 0, 00:13:18.815 "data_size": 0 00:13:18.815 } 00:13:18.815 ] 00:13:18.815 }' 00:13:18.816 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.816 17:32:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:19.074 17:32:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:19.331 [2024-07-15 17:32:15.149469] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:19.331 [2024-07-15 17:32:15.149543] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17df59834a00 00:13:19.331 [2024-07-15 17:32:15.149550] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:19.331 [2024-07-15 17:32:15.149572] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17df59897e20 00:13:19.331 [2024-07-15 17:32:15.149628] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17df59834a00 00:13:19.331 [2024-07-15 17:32:15.149633] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17df59834a00 00:13:19.331 [2024-07-15 17:32:15.149653] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:19.331 BaseBdev4 00:13:19.589 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:19.589 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:19.589 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:19.589 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:19.589 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:19.589 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:19.589 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:19.847 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:20.105 [ 00:13:20.105 { 00:13:20.105 "name": "BaseBdev4", 00:13:20.105 "aliases": [ 00:13:20.105 "2d3ec399-42d0-11ef-96ac-773515fba644" 00:13:20.105 ], 00:13:20.105 "product_name": "Malloc disk", 00:13:20.105 "block_size": 512, 
00:13:20.105 "num_blocks": 65536, 00:13:20.105 "uuid": "2d3ec399-42d0-11ef-96ac-773515fba644", 00:13:20.105 "assigned_rate_limits": { 00:13:20.105 "rw_ios_per_sec": 0, 00:13:20.105 "rw_mbytes_per_sec": 0, 00:13:20.105 "r_mbytes_per_sec": 0, 00:13:20.105 "w_mbytes_per_sec": 0 00:13:20.105 }, 00:13:20.105 "claimed": true, 00:13:20.105 "claim_type": "exclusive_write", 00:13:20.105 "zoned": false, 00:13:20.105 "supported_io_types": { 00:13:20.105 "read": true, 00:13:20.105 "write": true, 00:13:20.105 "unmap": true, 00:13:20.105 "flush": true, 00:13:20.105 "reset": true, 00:13:20.105 "nvme_admin": false, 00:13:20.105 "nvme_io": false, 00:13:20.105 "nvme_io_md": false, 00:13:20.105 "write_zeroes": true, 00:13:20.105 "zcopy": true, 00:13:20.105 "get_zone_info": false, 00:13:20.105 "zone_management": false, 00:13:20.105 "zone_append": false, 00:13:20.105 "compare": false, 00:13:20.105 "compare_and_write": false, 00:13:20.105 "abort": true, 00:13:20.105 "seek_hole": false, 00:13:20.105 "seek_data": false, 00:13:20.105 "copy": true, 00:13:20.105 "nvme_iov_md": false 00:13:20.105 }, 00:13:20.105 "memory_domains": [ 00:13:20.105 { 00:13:20.105 "dma_device_id": "system", 00:13:20.105 "dma_device_type": 1 00:13:20.105 }, 00:13:20.105 { 00:13:20.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.105 "dma_device_type": 2 00:13:20.105 } 00:13:20.105 ], 00:13:20.105 "driver_specific": {} 00:13:20.105 } 00:13:20.105 ] 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.105 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.367 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:20.367 "name": "Existed_Raid", 00:13:20.367 "uuid": "2b2b1e54-42d0-11ef-96ac-773515fba644", 00:13:20.367 "strip_size_kb": 64, 00:13:20.367 "state": "online", 00:13:20.367 "raid_level": 
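Once the fourth base bdev is claimed, the array goes online (the blockcnt 253952, blocklen 512 registration a little above). The size is consistent with the per-bdev dumps: with the superblock each 65536-block base bdev exposes 63488 data blocks at data_offset 2048, and raid0 across four of them gives 4 x 63488 = 253952 blocks. A quick check in the same style as the trace:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_get_bdevs -b Existed_Raid | jq '.[] | .num_blocks'   # expected: 253952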
"raid0", 00:13:20.367 "superblock": true, 00:13:20.367 "num_base_bdevs": 4, 00:13:20.367 "num_base_bdevs_discovered": 4, 00:13:20.367 "num_base_bdevs_operational": 4, 00:13:20.367 "base_bdevs_list": [ 00:13:20.367 { 00:13:20.367 "name": "BaseBdev1", 00:13:20.367 "uuid": "2a3791a3-42d0-11ef-96ac-773515fba644", 00:13:20.367 "is_configured": true, 00:13:20.367 "data_offset": 2048, 00:13:20.367 "data_size": 63488 00:13:20.367 }, 00:13:20.367 { 00:13:20.367 "name": "BaseBdev2", 00:13:20.367 "uuid": "2baef803-42d0-11ef-96ac-773515fba644", 00:13:20.367 "is_configured": true, 00:13:20.367 "data_offset": 2048, 00:13:20.367 "data_size": 63488 00:13:20.367 }, 00:13:20.367 { 00:13:20.367 "name": "BaseBdev3", 00:13:20.367 "uuid": "2c769004-42d0-11ef-96ac-773515fba644", 00:13:20.367 "is_configured": true, 00:13:20.367 "data_offset": 2048, 00:13:20.367 "data_size": 63488 00:13:20.367 }, 00:13:20.367 { 00:13:20.367 "name": "BaseBdev4", 00:13:20.367 "uuid": "2d3ec399-42d0-11ef-96ac-773515fba644", 00:13:20.367 "is_configured": true, 00:13:20.367 "data_offset": 2048, 00:13:20.367 "data_size": 63488 00:13:20.367 } 00:13:20.367 ] 00:13:20.367 }' 00:13:20.367 17:32:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:20.367 17:32:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.655 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:20.655 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:20.655 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:20.655 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:20.655 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:20.655 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:20.655 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:20.655 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:20.914 [2024-07-15 17:32:16.569412] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:20.914 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:20.914 "name": "Existed_Raid", 00:13:20.914 "aliases": [ 00:13:20.914 "2b2b1e54-42d0-11ef-96ac-773515fba644" 00:13:20.914 ], 00:13:20.914 "product_name": "Raid Volume", 00:13:20.914 "block_size": 512, 00:13:20.914 "num_blocks": 253952, 00:13:20.914 "uuid": "2b2b1e54-42d0-11ef-96ac-773515fba644", 00:13:20.914 "assigned_rate_limits": { 00:13:20.914 "rw_ios_per_sec": 0, 00:13:20.914 "rw_mbytes_per_sec": 0, 00:13:20.914 "r_mbytes_per_sec": 0, 00:13:20.914 "w_mbytes_per_sec": 0 00:13:20.914 }, 00:13:20.914 "claimed": false, 00:13:20.914 "zoned": false, 00:13:20.914 "supported_io_types": { 00:13:20.914 "read": true, 00:13:20.914 "write": true, 00:13:20.914 "unmap": true, 00:13:20.914 "flush": true, 00:13:20.914 "reset": true, 00:13:20.914 "nvme_admin": false, 00:13:20.914 "nvme_io": false, 00:13:20.914 "nvme_io_md": false, 00:13:20.914 "write_zeroes": true, 00:13:20.914 "zcopy": false, 00:13:20.914 "get_zone_info": false, 00:13:20.914 "zone_management": false, 00:13:20.914 
"zone_append": false, 00:13:20.914 "compare": false, 00:13:20.914 "compare_and_write": false, 00:13:20.914 "abort": false, 00:13:20.914 "seek_hole": false, 00:13:20.914 "seek_data": false, 00:13:20.914 "copy": false, 00:13:20.914 "nvme_iov_md": false 00:13:20.914 }, 00:13:20.914 "memory_domains": [ 00:13:20.914 { 00:13:20.914 "dma_device_id": "system", 00:13:20.914 "dma_device_type": 1 00:13:20.914 }, 00:13:20.914 { 00:13:20.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.914 "dma_device_type": 2 00:13:20.914 }, 00:13:20.914 { 00:13:20.914 "dma_device_id": "system", 00:13:20.914 "dma_device_type": 1 00:13:20.914 }, 00:13:20.914 { 00:13:20.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.915 "dma_device_type": 2 00:13:20.915 }, 00:13:20.915 { 00:13:20.915 "dma_device_id": "system", 00:13:20.915 "dma_device_type": 1 00:13:20.915 }, 00:13:20.915 { 00:13:20.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.915 "dma_device_type": 2 00:13:20.915 }, 00:13:20.915 { 00:13:20.915 "dma_device_id": "system", 00:13:20.915 "dma_device_type": 1 00:13:20.915 }, 00:13:20.915 { 00:13:20.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.915 "dma_device_type": 2 00:13:20.915 } 00:13:20.915 ], 00:13:20.915 "driver_specific": { 00:13:20.915 "raid": { 00:13:20.915 "uuid": "2b2b1e54-42d0-11ef-96ac-773515fba644", 00:13:20.915 "strip_size_kb": 64, 00:13:20.915 "state": "online", 00:13:20.915 "raid_level": "raid0", 00:13:20.915 "superblock": true, 00:13:20.915 "num_base_bdevs": 4, 00:13:20.915 "num_base_bdevs_discovered": 4, 00:13:20.915 "num_base_bdevs_operational": 4, 00:13:20.915 "base_bdevs_list": [ 00:13:20.915 { 00:13:20.915 "name": "BaseBdev1", 00:13:20.915 "uuid": "2a3791a3-42d0-11ef-96ac-773515fba644", 00:13:20.915 "is_configured": true, 00:13:20.915 "data_offset": 2048, 00:13:20.915 "data_size": 63488 00:13:20.915 }, 00:13:20.915 { 00:13:20.915 "name": "BaseBdev2", 00:13:20.915 "uuid": "2baef803-42d0-11ef-96ac-773515fba644", 00:13:20.915 "is_configured": true, 00:13:20.915 "data_offset": 2048, 00:13:20.915 "data_size": 63488 00:13:20.915 }, 00:13:20.915 { 00:13:20.915 "name": "BaseBdev3", 00:13:20.915 "uuid": "2c769004-42d0-11ef-96ac-773515fba644", 00:13:20.915 "is_configured": true, 00:13:20.915 "data_offset": 2048, 00:13:20.915 "data_size": 63488 00:13:20.915 }, 00:13:20.915 { 00:13:20.915 "name": "BaseBdev4", 00:13:20.915 "uuid": "2d3ec399-42d0-11ef-96ac-773515fba644", 00:13:20.915 "is_configured": true, 00:13:20.915 "data_offset": 2048, 00:13:20.915 "data_size": 63488 00:13:20.915 } 00:13:20.915 ] 00:13:20.915 } 00:13:20.915 } 00:13:20.915 }' 00:13:20.915 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:20.915 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:20.915 BaseBdev2 00:13:20.915 BaseBdev3 00:13:20.915 BaseBdev4' 00:13:20.915 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:20.915 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:20.915 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:21.172 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:21.172 "name": "BaseBdev1", 00:13:21.172 "aliases": [ 00:13:21.172 
"2a3791a3-42d0-11ef-96ac-773515fba644" 00:13:21.172 ], 00:13:21.172 "product_name": "Malloc disk", 00:13:21.172 "block_size": 512, 00:13:21.172 "num_blocks": 65536, 00:13:21.172 "uuid": "2a3791a3-42d0-11ef-96ac-773515fba644", 00:13:21.172 "assigned_rate_limits": { 00:13:21.172 "rw_ios_per_sec": 0, 00:13:21.172 "rw_mbytes_per_sec": 0, 00:13:21.172 "r_mbytes_per_sec": 0, 00:13:21.172 "w_mbytes_per_sec": 0 00:13:21.172 }, 00:13:21.173 "claimed": true, 00:13:21.173 "claim_type": "exclusive_write", 00:13:21.173 "zoned": false, 00:13:21.173 "supported_io_types": { 00:13:21.173 "read": true, 00:13:21.173 "write": true, 00:13:21.173 "unmap": true, 00:13:21.173 "flush": true, 00:13:21.173 "reset": true, 00:13:21.173 "nvme_admin": false, 00:13:21.173 "nvme_io": false, 00:13:21.173 "nvme_io_md": false, 00:13:21.173 "write_zeroes": true, 00:13:21.173 "zcopy": true, 00:13:21.173 "get_zone_info": false, 00:13:21.173 "zone_management": false, 00:13:21.173 "zone_append": false, 00:13:21.173 "compare": false, 00:13:21.173 "compare_and_write": false, 00:13:21.173 "abort": true, 00:13:21.173 "seek_hole": false, 00:13:21.173 "seek_data": false, 00:13:21.173 "copy": true, 00:13:21.173 "nvme_iov_md": false 00:13:21.173 }, 00:13:21.173 "memory_domains": [ 00:13:21.173 { 00:13:21.173 "dma_device_id": "system", 00:13:21.173 "dma_device_type": 1 00:13:21.173 }, 00:13:21.173 { 00:13:21.173 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.173 "dma_device_type": 2 00:13:21.173 } 00:13:21.173 ], 00:13:21.173 "driver_specific": {} 00:13:21.173 }' 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:21.173 17:32:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:21.431 "name": "BaseBdev2", 00:13:21.431 "aliases": [ 00:13:21.431 "2baef803-42d0-11ef-96ac-773515fba644" 00:13:21.431 ], 00:13:21.431 "product_name": "Malloc disk", 00:13:21.431 "block_size": 512, 00:13:21.431 
"num_blocks": 65536, 00:13:21.431 "uuid": "2baef803-42d0-11ef-96ac-773515fba644", 00:13:21.431 "assigned_rate_limits": { 00:13:21.431 "rw_ios_per_sec": 0, 00:13:21.431 "rw_mbytes_per_sec": 0, 00:13:21.431 "r_mbytes_per_sec": 0, 00:13:21.431 "w_mbytes_per_sec": 0 00:13:21.431 }, 00:13:21.431 "claimed": true, 00:13:21.431 "claim_type": "exclusive_write", 00:13:21.431 "zoned": false, 00:13:21.431 "supported_io_types": { 00:13:21.431 "read": true, 00:13:21.431 "write": true, 00:13:21.431 "unmap": true, 00:13:21.431 "flush": true, 00:13:21.431 "reset": true, 00:13:21.431 "nvme_admin": false, 00:13:21.431 "nvme_io": false, 00:13:21.431 "nvme_io_md": false, 00:13:21.431 "write_zeroes": true, 00:13:21.431 "zcopy": true, 00:13:21.431 "get_zone_info": false, 00:13:21.431 "zone_management": false, 00:13:21.431 "zone_append": false, 00:13:21.431 "compare": false, 00:13:21.431 "compare_and_write": false, 00:13:21.431 "abort": true, 00:13:21.431 "seek_hole": false, 00:13:21.431 "seek_data": false, 00:13:21.431 "copy": true, 00:13:21.431 "nvme_iov_md": false 00:13:21.431 }, 00:13:21.431 "memory_domains": [ 00:13:21.431 { 00:13:21.431 "dma_device_id": "system", 00:13:21.431 "dma_device_type": 1 00:13:21.431 }, 00:13:21.431 { 00:13:21.431 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.431 "dma_device_type": 2 00:13:21.431 } 00:13:21.431 ], 00:13:21.431 "driver_specific": {} 00:13:21.431 }' 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:21.431 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.432 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.432 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:21.432 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:21.432 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:21.432 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:21.690 "name": "BaseBdev3", 00:13:21.690 "aliases": [ 00:13:21.690 "2c769004-42d0-11ef-96ac-773515fba644" 00:13:21.690 ], 00:13:21.690 "product_name": "Malloc disk", 00:13:21.690 "block_size": 512, 00:13:21.690 "num_blocks": 65536, 00:13:21.690 "uuid": "2c769004-42d0-11ef-96ac-773515fba644", 00:13:21.690 "assigned_rate_limits": { 00:13:21.690 "rw_ios_per_sec": 
0, 00:13:21.690 "rw_mbytes_per_sec": 0, 00:13:21.690 "r_mbytes_per_sec": 0, 00:13:21.690 "w_mbytes_per_sec": 0 00:13:21.690 }, 00:13:21.690 "claimed": true, 00:13:21.690 "claim_type": "exclusive_write", 00:13:21.690 "zoned": false, 00:13:21.690 "supported_io_types": { 00:13:21.690 "read": true, 00:13:21.690 "write": true, 00:13:21.690 "unmap": true, 00:13:21.690 "flush": true, 00:13:21.690 "reset": true, 00:13:21.690 "nvme_admin": false, 00:13:21.690 "nvme_io": false, 00:13:21.690 "nvme_io_md": false, 00:13:21.690 "write_zeroes": true, 00:13:21.690 "zcopy": true, 00:13:21.690 "get_zone_info": false, 00:13:21.690 "zone_management": false, 00:13:21.690 "zone_append": false, 00:13:21.690 "compare": false, 00:13:21.690 "compare_and_write": false, 00:13:21.690 "abort": true, 00:13:21.690 "seek_hole": false, 00:13:21.690 "seek_data": false, 00:13:21.690 "copy": true, 00:13:21.690 "nvme_iov_md": false 00:13:21.690 }, 00:13:21.690 "memory_domains": [ 00:13:21.690 { 00:13:21.690 "dma_device_id": "system", 00:13:21.690 "dma_device_type": 1 00:13:21.690 }, 00:13:21.690 { 00:13:21.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:21.690 "dma_device_type": 2 00:13:21.690 } 00:13:21.690 ], 00:13:21.690 "driver_specific": {} 00:13:21.690 }' 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.690 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:21.949 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:21.949 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.949 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:21.949 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:21.949 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:21.949 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:21.949 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:22.206 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:22.206 "name": "BaseBdev4", 00:13:22.206 "aliases": [ 00:13:22.206 "2d3ec399-42d0-11ef-96ac-773515fba644" 00:13:22.206 ], 00:13:22.206 "product_name": "Malloc disk", 00:13:22.206 "block_size": 512, 00:13:22.206 "num_blocks": 65536, 00:13:22.206 "uuid": "2d3ec399-42d0-11ef-96ac-773515fba644", 00:13:22.206 "assigned_rate_limits": { 00:13:22.206 "rw_ios_per_sec": 0, 00:13:22.206 "rw_mbytes_per_sec": 0, 00:13:22.206 "r_mbytes_per_sec": 0, 00:13:22.206 "w_mbytes_per_sec": 0 00:13:22.206 }, 00:13:22.206 "claimed": 
true, 00:13:22.206 "claim_type": "exclusive_write", 00:13:22.206 "zoned": false, 00:13:22.206 "supported_io_types": { 00:13:22.206 "read": true, 00:13:22.206 "write": true, 00:13:22.206 "unmap": true, 00:13:22.206 "flush": true, 00:13:22.206 "reset": true, 00:13:22.206 "nvme_admin": false, 00:13:22.206 "nvme_io": false, 00:13:22.206 "nvme_io_md": false, 00:13:22.206 "write_zeroes": true, 00:13:22.206 "zcopy": true, 00:13:22.206 "get_zone_info": false, 00:13:22.207 "zone_management": false, 00:13:22.207 "zone_append": false, 00:13:22.207 "compare": false, 00:13:22.207 "compare_and_write": false, 00:13:22.207 "abort": true, 00:13:22.207 "seek_hole": false, 00:13:22.207 "seek_data": false, 00:13:22.207 "copy": true, 00:13:22.207 "nvme_iov_md": false 00:13:22.207 }, 00:13:22.207 "memory_domains": [ 00:13:22.207 { 00:13:22.207 "dma_device_id": "system", 00:13:22.207 "dma_device_type": 1 00:13:22.207 }, 00:13:22.207 { 00:13:22.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.207 "dma_device_type": 2 00:13:22.207 } 00:13:22.207 ], 00:13:22.207 "driver_specific": {} 00:13:22.207 }' 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:22.207 17:32:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:22.465 [2024-07-15 17:32:18.145425] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:22.465 [2024-07-15 17:32:18.145449] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:22.465 [2024-07-15 17:32:18.145463] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # 
verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.465 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.723 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.723 "name": "Existed_Raid", 00:13:22.723 "uuid": "2b2b1e54-42d0-11ef-96ac-773515fba644", 00:13:22.723 "strip_size_kb": 64, 00:13:22.723 "state": "offline", 00:13:22.723 "raid_level": "raid0", 00:13:22.723 "superblock": true, 00:13:22.723 "num_base_bdevs": 4, 00:13:22.723 "num_base_bdevs_discovered": 3, 00:13:22.723 "num_base_bdevs_operational": 3, 00:13:22.723 "base_bdevs_list": [ 00:13:22.723 { 00:13:22.723 "name": null, 00:13:22.723 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.723 "is_configured": false, 00:13:22.723 "data_offset": 2048, 00:13:22.723 "data_size": 63488 00:13:22.723 }, 00:13:22.723 { 00:13:22.723 "name": "BaseBdev2", 00:13:22.723 "uuid": "2baef803-42d0-11ef-96ac-773515fba644", 00:13:22.723 "is_configured": true, 00:13:22.723 "data_offset": 2048, 00:13:22.723 "data_size": 63488 00:13:22.723 }, 00:13:22.723 { 00:13:22.723 "name": "BaseBdev3", 00:13:22.723 "uuid": "2c769004-42d0-11ef-96ac-773515fba644", 00:13:22.723 "is_configured": true, 00:13:22.723 "data_offset": 2048, 00:13:22.723 "data_size": 63488 00:13:22.723 }, 00:13:22.723 { 00:13:22.723 "name": "BaseBdev4", 00:13:22.723 "uuid": "2d3ec399-42d0-11ef-96ac-773515fba644", 00:13:22.723 "is_configured": true, 00:13:22.723 "data_offset": 2048, 00:13:22.723 "data_size": 63488 00:13:22.723 } 00:13:22.723 ] 00:13:22.723 }' 00:13:22.723 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.723 17:32:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.981 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:22.981 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:22.981 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:22.981 17:32:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:13:23.238 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:23.238 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:23.238 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:23.496 [2024-07-15 17:32:19.303490] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:23.496 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:23.496 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:23.496 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:23.496 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.063 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:24.063 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.063 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:24.063 [2024-07-15 17:32:19.865611] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:24.063 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:24.063 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:24.063 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.063 17:32:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:24.641 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:24.641 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.641 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:24.641 [2024-07-15 17:32:20.431938] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:24.641 [2024-07-15 17:32:20.431996] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17df59834a00 name Existed_Raid, state offline 00:13:24.641 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:24.641 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:24.641 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.641 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:24.900 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:24.900 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 
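Having torn the array down completely, the rest of the trace re-assembles it member by member. For orientation, the RPC sequence driven below can be reproduced by hand against the same test socket; this is only a sketch pieced together from commands that appear verbatim in the trace (socket path, malloc sizes, and bdev names are the test's own), not an independent recipe:

  # re-create three of the four base devices (BaseBdev1 is deliberately left missing at this point)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4
  # assemble a raid0 with superblock over all four names; with BaseBdev1 still absent the
  # raid bdev is registered but remains in the "configuring" state
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # inspect state / num_base_bdevs_discovered, as the verify_raid_bdev_state helper does below
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all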
00:13:24.900 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:24.900 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:24.900 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:24.900 17:32:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:25.158 BaseBdev2 00:13:25.416 17:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:25.416 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:25.416 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:25.416 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:25.416 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:25.416 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:25.416 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:25.674 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:25.932 [ 00:13:25.932 { 00:13:25.932 "name": "BaseBdev2", 00:13:25.932 "aliases": [ 00:13:25.932 "30b8ca45-42d0-11ef-96ac-773515fba644" 00:13:25.932 ], 00:13:25.932 "product_name": "Malloc disk", 00:13:25.932 "block_size": 512, 00:13:25.932 "num_blocks": 65536, 00:13:25.932 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:25.932 "assigned_rate_limits": { 00:13:25.932 "rw_ios_per_sec": 0, 00:13:25.932 "rw_mbytes_per_sec": 0, 00:13:25.932 "r_mbytes_per_sec": 0, 00:13:25.932 "w_mbytes_per_sec": 0 00:13:25.932 }, 00:13:25.932 "claimed": false, 00:13:25.932 "zoned": false, 00:13:25.932 "supported_io_types": { 00:13:25.932 "read": true, 00:13:25.932 "write": true, 00:13:25.932 "unmap": true, 00:13:25.932 "flush": true, 00:13:25.932 "reset": true, 00:13:25.932 "nvme_admin": false, 00:13:25.932 "nvme_io": false, 00:13:25.932 "nvme_io_md": false, 00:13:25.932 "write_zeroes": true, 00:13:25.932 "zcopy": true, 00:13:25.932 "get_zone_info": false, 00:13:25.932 "zone_management": false, 00:13:25.932 "zone_append": false, 00:13:25.932 "compare": false, 00:13:25.932 "compare_and_write": false, 00:13:25.932 "abort": true, 00:13:25.932 "seek_hole": false, 00:13:25.932 "seek_data": false, 00:13:25.932 "copy": true, 00:13:25.932 "nvme_iov_md": false 00:13:25.932 }, 00:13:25.932 "memory_domains": [ 00:13:25.932 { 00:13:25.932 "dma_device_id": "system", 00:13:25.932 "dma_device_type": 1 00:13:25.932 }, 00:13:25.932 { 00:13:25.932 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.932 "dma_device_type": 2 00:13:25.932 } 00:13:25.932 ], 00:13:25.932 "driver_specific": {} 00:13:25.932 } 00:13:25.932 ] 00:13:25.932 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:25.932 17:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:25.932 17:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < 
num_base_bdevs )) 00:13:25.932 17:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:26.189 BaseBdev3 00:13:26.189 17:32:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:26.189 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:26.190 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:26.190 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:26.190 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:26.190 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:26.190 17:32:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:26.447 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:26.705 [ 00:13:26.705 { 00:13:26.705 "name": "BaseBdev3", 00:13:26.705 "aliases": [ 00:13:26.705 "3134b06a-42d0-11ef-96ac-773515fba644" 00:13:26.705 ], 00:13:26.705 "product_name": "Malloc disk", 00:13:26.705 "block_size": 512, 00:13:26.705 "num_blocks": 65536, 00:13:26.705 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:26.705 "assigned_rate_limits": { 00:13:26.705 "rw_ios_per_sec": 0, 00:13:26.705 "rw_mbytes_per_sec": 0, 00:13:26.705 "r_mbytes_per_sec": 0, 00:13:26.705 "w_mbytes_per_sec": 0 00:13:26.705 }, 00:13:26.705 "claimed": false, 00:13:26.705 "zoned": false, 00:13:26.705 "supported_io_types": { 00:13:26.705 "read": true, 00:13:26.705 "write": true, 00:13:26.705 "unmap": true, 00:13:26.705 "flush": true, 00:13:26.705 "reset": true, 00:13:26.705 "nvme_admin": false, 00:13:26.705 "nvme_io": false, 00:13:26.705 "nvme_io_md": false, 00:13:26.705 "write_zeroes": true, 00:13:26.705 "zcopy": true, 00:13:26.705 "get_zone_info": false, 00:13:26.705 "zone_management": false, 00:13:26.705 "zone_append": false, 00:13:26.705 "compare": false, 00:13:26.705 "compare_and_write": false, 00:13:26.705 "abort": true, 00:13:26.705 "seek_hole": false, 00:13:26.705 "seek_data": false, 00:13:26.705 "copy": true, 00:13:26.705 "nvme_iov_md": false 00:13:26.705 }, 00:13:26.705 "memory_domains": [ 00:13:26.705 { 00:13:26.705 "dma_device_id": "system", 00:13:26.705 "dma_device_type": 1 00:13:26.705 }, 00:13:26.705 { 00:13:26.705 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.705 "dma_device_type": 2 00:13:26.705 } 00:13:26.705 ], 00:13:26.705 "driver_specific": {} 00:13:26.705 } 00:13:26.705 ] 00:13:26.705 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:26.705 17:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:26.705 17:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:26.705 17:32:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:26.962 BaseBdev4 00:13:26.962 17:32:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:26.962 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:26.962 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:26.962 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:26.962 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:26.962 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:26.962 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.219 17:32:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:27.219 [ 00:13:27.219 { 00:13:27.219 "name": "BaseBdev4", 00:13:27.219 "aliases": [ 00:13:27.219 "31ab1b09-42d0-11ef-96ac-773515fba644" 00:13:27.219 ], 00:13:27.219 "product_name": "Malloc disk", 00:13:27.219 "block_size": 512, 00:13:27.219 "num_blocks": 65536, 00:13:27.219 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:27.219 "assigned_rate_limits": { 00:13:27.219 "rw_ios_per_sec": 0, 00:13:27.219 "rw_mbytes_per_sec": 0, 00:13:27.219 "r_mbytes_per_sec": 0, 00:13:27.219 "w_mbytes_per_sec": 0 00:13:27.219 }, 00:13:27.219 "claimed": false, 00:13:27.219 "zoned": false, 00:13:27.219 "supported_io_types": { 00:13:27.219 "read": true, 00:13:27.219 "write": true, 00:13:27.219 "unmap": true, 00:13:27.219 "flush": true, 00:13:27.219 "reset": true, 00:13:27.219 "nvme_admin": false, 00:13:27.219 "nvme_io": false, 00:13:27.219 "nvme_io_md": false, 00:13:27.219 "write_zeroes": true, 00:13:27.219 "zcopy": true, 00:13:27.219 "get_zone_info": false, 00:13:27.219 "zone_management": false, 00:13:27.219 "zone_append": false, 00:13:27.219 "compare": false, 00:13:27.219 "compare_and_write": false, 00:13:27.219 "abort": true, 00:13:27.219 "seek_hole": false, 00:13:27.219 "seek_data": false, 00:13:27.219 "copy": true, 00:13:27.219 "nvme_iov_md": false 00:13:27.219 }, 00:13:27.219 "memory_domains": [ 00:13:27.219 { 00:13:27.219 "dma_device_id": "system", 00:13:27.219 "dma_device_type": 1 00:13:27.219 }, 00:13:27.219 { 00:13:27.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.219 "dma_device_type": 2 00:13:27.219 } 00:13:27.219 ], 00:13:27.219 "driver_specific": {} 00:13:27.219 } 00:13:27.219 ] 00:13:27.476 17:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:27.476 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:27.476 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:27.476 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:27.476 [2024-07-15 17:32:23.298292] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:27.476 [2024-07-15 17:32:23.298347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:27.476 [2024-07-15 17:32:23.298357] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:27.476 [2024-07-15 17:32:23.298942] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.476 [2024-07-15 17:32:23.298960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:27.734 "name": "Existed_Raid", 00:13:27.734 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:27.734 "strip_size_kb": 64, 00:13:27.734 "state": "configuring", 00:13:27.734 "raid_level": "raid0", 00:13:27.734 "superblock": true, 00:13:27.734 "num_base_bdevs": 4, 00:13:27.734 "num_base_bdevs_discovered": 3, 00:13:27.734 "num_base_bdevs_operational": 4, 00:13:27.734 "base_bdevs_list": [ 00:13:27.734 { 00:13:27.734 "name": "BaseBdev1", 00:13:27.734 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.734 "is_configured": false, 00:13:27.734 "data_offset": 0, 00:13:27.734 "data_size": 0 00:13:27.734 }, 00:13:27.734 { 00:13:27.734 "name": "BaseBdev2", 00:13:27.734 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:27.734 "is_configured": true, 00:13:27.734 "data_offset": 2048, 00:13:27.734 "data_size": 63488 00:13:27.734 }, 00:13:27.734 { 00:13:27.734 "name": "BaseBdev3", 00:13:27.734 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:27.734 "is_configured": true, 00:13:27.734 "data_offset": 2048, 00:13:27.734 "data_size": 63488 00:13:27.734 }, 00:13:27.734 { 00:13:27.734 "name": "BaseBdev4", 00:13:27.734 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:27.734 "is_configured": true, 00:13:27.734 "data_offset": 2048, 00:13:27.734 "data_size": 63488 00:13:27.734 } 00:13:27.734 ] 00:13:27.734 }' 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:27.734 17:32:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.330 17:32:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:28.330 [2024-07-15 17:32:24.130337] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:28.330 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.605 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.605 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:28.605 "name": "Existed_Raid", 00:13:28.605 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:28.605 "strip_size_kb": 64, 00:13:28.605 "state": "configuring", 00:13:28.605 "raid_level": "raid0", 00:13:28.605 "superblock": true, 00:13:28.605 "num_base_bdevs": 4, 00:13:28.605 "num_base_bdevs_discovered": 2, 00:13:28.605 "num_base_bdevs_operational": 4, 00:13:28.605 "base_bdevs_list": [ 00:13:28.605 { 00:13:28.605 "name": "BaseBdev1", 00:13:28.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.605 "is_configured": false, 00:13:28.605 "data_offset": 0, 00:13:28.605 "data_size": 0 00:13:28.605 }, 00:13:28.605 { 00:13:28.605 "name": null, 00:13:28.605 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:28.605 "is_configured": false, 00:13:28.605 "data_offset": 2048, 00:13:28.605 "data_size": 63488 00:13:28.605 }, 00:13:28.605 { 00:13:28.605 "name": "BaseBdev3", 00:13:28.605 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:28.605 "is_configured": true, 00:13:28.605 "data_offset": 2048, 00:13:28.605 "data_size": 63488 00:13:28.605 }, 00:13:28.605 { 00:13:28.605 "name": "BaseBdev4", 00:13:28.605 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:28.605 "is_configured": true, 00:13:28.605 "data_offset": 2048, 00:13:28.605 "data_size": 63488 00:13:28.605 } 00:13:28.605 ] 00:13:28.605 }' 00:13:28.605 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:28.605 17:32:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.171 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.171 17:32:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:29.429 [2024-07-15 17:32:25.226491] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:29.429 BaseBdev1 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:29.429 17:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:29.688 17:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:29.946 [ 00:13:29.946 { 00:13:29.946 "name": "BaseBdev1", 00:13:29.946 "aliases": [ 00:13:29.946 "33406547-42d0-11ef-96ac-773515fba644" 00:13:29.946 ], 00:13:29.946 "product_name": "Malloc disk", 00:13:29.946 "block_size": 512, 00:13:29.946 "num_blocks": 65536, 00:13:29.946 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:29.946 "assigned_rate_limits": { 00:13:29.946 "rw_ios_per_sec": 0, 00:13:29.946 "rw_mbytes_per_sec": 0, 00:13:29.946 "r_mbytes_per_sec": 0, 00:13:29.946 "w_mbytes_per_sec": 0 00:13:29.946 }, 00:13:29.946 "claimed": true, 00:13:29.946 "claim_type": "exclusive_write", 00:13:29.946 "zoned": false, 00:13:29.946 "supported_io_types": { 00:13:29.946 "read": true, 00:13:29.946 "write": true, 00:13:29.946 "unmap": true, 00:13:29.946 "flush": true, 00:13:29.946 "reset": true, 00:13:29.946 "nvme_admin": false, 00:13:29.946 "nvme_io": false, 00:13:29.946 "nvme_io_md": false, 00:13:29.946 "write_zeroes": true, 00:13:29.946 "zcopy": true, 00:13:29.946 "get_zone_info": false, 00:13:29.946 "zone_management": false, 00:13:29.946 "zone_append": false, 00:13:29.946 "compare": false, 00:13:29.946 "compare_and_write": false, 00:13:29.946 "abort": true, 00:13:29.946 "seek_hole": false, 00:13:29.946 "seek_data": false, 00:13:29.946 "copy": true, 00:13:29.946 "nvme_iov_md": false 00:13:29.946 }, 00:13:29.946 "memory_domains": [ 00:13:29.946 { 00:13:29.946 "dma_device_id": "system", 00:13:29.946 "dma_device_type": 1 00:13:29.946 }, 00:13:29.946 { 00:13:29.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.946 "dma_device_type": 2 00:13:29.946 } 00:13:29.946 ], 00:13:29.946 "driver_specific": {} 00:13:29.946 } 00:13:29.946 ] 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.946 17:32:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.204 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:30.204 "name": "Existed_Raid", 00:13:30.204 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:30.204 "strip_size_kb": 64, 00:13:30.204 "state": "configuring", 00:13:30.204 "raid_level": "raid0", 00:13:30.204 "superblock": true, 00:13:30.204 "num_base_bdevs": 4, 00:13:30.204 "num_base_bdevs_discovered": 3, 00:13:30.204 "num_base_bdevs_operational": 4, 00:13:30.204 "base_bdevs_list": [ 00:13:30.204 { 00:13:30.204 "name": "BaseBdev1", 00:13:30.204 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:30.204 "is_configured": true, 00:13:30.204 "data_offset": 2048, 00:13:30.204 "data_size": 63488 00:13:30.204 }, 00:13:30.204 { 00:13:30.204 "name": null, 00:13:30.204 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:30.204 "is_configured": false, 00:13:30.204 "data_offset": 2048, 00:13:30.204 "data_size": 63488 00:13:30.204 }, 00:13:30.204 { 00:13:30.204 "name": "BaseBdev3", 00:13:30.204 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:30.204 "is_configured": true, 00:13:30.204 "data_offset": 2048, 00:13:30.204 "data_size": 63488 00:13:30.204 }, 00:13:30.204 { 00:13:30.204 "name": "BaseBdev4", 00:13:30.204 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:30.204 "is_configured": true, 00:13:30.204 "data_offset": 2048, 00:13:30.204 "data_size": 63488 00:13:30.204 } 00:13:30.205 ] 00:13:30.205 }' 00:13:30.205 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:30.205 17:32:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.769 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.769 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:30.769 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:30.769 17:32:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:31.028 [2024-07-15 17:32:26.830397] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.028 17:32:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.286 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.286 "name": "Existed_Raid", 00:13:31.286 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:31.286 "strip_size_kb": 64, 00:13:31.286 "state": "configuring", 00:13:31.286 "raid_level": "raid0", 00:13:31.286 "superblock": true, 00:13:31.286 "num_base_bdevs": 4, 00:13:31.286 "num_base_bdevs_discovered": 2, 00:13:31.286 "num_base_bdevs_operational": 4, 00:13:31.286 "base_bdevs_list": [ 00:13:31.286 { 00:13:31.286 "name": "BaseBdev1", 00:13:31.286 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:31.286 "is_configured": true, 00:13:31.286 "data_offset": 2048, 00:13:31.286 "data_size": 63488 00:13:31.286 }, 00:13:31.286 { 00:13:31.286 "name": null, 00:13:31.286 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:31.286 "is_configured": false, 00:13:31.286 "data_offset": 2048, 00:13:31.286 "data_size": 63488 00:13:31.286 }, 00:13:31.286 { 00:13:31.286 "name": null, 00:13:31.286 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:31.286 "is_configured": false, 00:13:31.286 "data_offset": 2048, 00:13:31.286 "data_size": 63488 00:13:31.286 }, 00:13:31.286 { 00:13:31.286 "name": "BaseBdev4", 00:13:31.286 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:31.286 "is_configured": true, 00:13:31.286 "data_offset": 2048, 00:13:31.286 "data_size": 63488 00:13:31.286 } 00:13:31.286 ] 00:13:31.286 }' 00:13:31.286 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.286 17:32:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.853 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.853 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:31.853 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:31.853 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:32.112 [2024-07-15 17:32:27.942427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.371 17:32:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.628 17:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.628 "name": "Existed_Raid", 00:13:32.628 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:32.628 "strip_size_kb": 64, 00:13:32.628 "state": "configuring", 00:13:32.628 "raid_level": "raid0", 00:13:32.628 "superblock": true, 00:13:32.628 "num_base_bdevs": 4, 00:13:32.628 "num_base_bdevs_discovered": 3, 00:13:32.628 "num_base_bdevs_operational": 4, 00:13:32.628 "base_bdevs_list": [ 00:13:32.628 { 00:13:32.628 "name": "BaseBdev1", 00:13:32.628 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:32.628 "is_configured": true, 00:13:32.628 "data_offset": 2048, 00:13:32.628 "data_size": 63488 00:13:32.628 }, 00:13:32.628 { 00:13:32.628 "name": null, 00:13:32.628 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:32.628 "is_configured": false, 00:13:32.628 "data_offset": 2048, 00:13:32.628 "data_size": 63488 00:13:32.628 }, 00:13:32.628 { 00:13:32.628 "name": "BaseBdev3", 00:13:32.628 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:32.628 "is_configured": true, 00:13:32.628 "data_offset": 2048, 00:13:32.628 "data_size": 63488 00:13:32.628 }, 00:13:32.628 { 00:13:32.628 "name": "BaseBdev4", 00:13:32.628 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:32.628 "is_configured": true, 00:13:32.628 "data_offset": 2048, 
00:13:32.628 "data_size": 63488 00:13:32.628 } 00:13:32.628 ] 00:13:32.628 }' 00:13:32.628 17:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.628 17:32:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.887 17:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.887 17:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.146 17:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:33.146 17:32:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:33.404 [2024-07-15 17:32:29.086454] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.404 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.663 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.663 "name": "Existed_Raid", 00:13:33.663 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:33.663 "strip_size_kb": 64, 00:13:33.663 "state": "configuring", 00:13:33.663 "raid_level": "raid0", 00:13:33.663 "superblock": true, 00:13:33.663 "num_base_bdevs": 4, 00:13:33.663 "num_base_bdevs_discovered": 2, 00:13:33.663 "num_base_bdevs_operational": 4, 00:13:33.663 "base_bdevs_list": [ 00:13:33.663 { 00:13:33.663 "name": null, 00:13:33.663 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:33.663 "is_configured": false, 00:13:33.663 "data_offset": 2048, 00:13:33.663 "data_size": 63488 00:13:33.663 }, 00:13:33.663 { 00:13:33.663 "name": null, 00:13:33.663 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:33.663 "is_configured": false, 00:13:33.663 "data_offset": 2048, 00:13:33.663 "data_size": 63488 00:13:33.663 }, 00:13:33.663 { 00:13:33.663 "name": "BaseBdev3", 00:13:33.663 "uuid": 
"3134b06a-42d0-11ef-96ac-773515fba644", 00:13:33.663 "is_configured": true, 00:13:33.663 "data_offset": 2048, 00:13:33.663 "data_size": 63488 00:13:33.663 }, 00:13:33.663 { 00:13:33.663 "name": "BaseBdev4", 00:13:33.663 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:33.663 "is_configured": true, 00:13:33.663 "data_offset": 2048, 00:13:33.663 "data_size": 63488 00:13:33.663 } 00:13:33.663 ] 00:13:33.663 }' 00:13:33.663 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.663 17:32:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.922 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.922 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:34.180 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:34.180 17:32:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:34.438 [2024-07-15 17:32:30.144486] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.438 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.697 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.697 "name": "Existed_Raid", 00:13:34.697 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:34.697 "strip_size_kb": 64, 00:13:34.697 "state": "configuring", 00:13:34.697 "raid_level": "raid0", 00:13:34.697 "superblock": true, 00:13:34.697 "num_base_bdevs": 4, 00:13:34.697 "num_base_bdevs_discovered": 3, 00:13:34.697 "num_base_bdevs_operational": 4, 00:13:34.697 "base_bdevs_list": [ 00:13:34.697 { 00:13:34.697 "name": null, 00:13:34.697 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:34.697 "is_configured": false, 
00:13:34.697 "data_offset": 2048, 00:13:34.697 "data_size": 63488 00:13:34.697 }, 00:13:34.697 { 00:13:34.697 "name": "BaseBdev2", 00:13:34.697 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:34.697 "is_configured": true, 00:13:34.697 "data_offset": 2048, 00:13:34.697 "data_size": 63488 00:13:34.698 }, 00:13:34.698 { 00:13:34.698 "name": "BaseBdev3", 00:13:34.698 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:34.698 "is_configured": true, 00:13:34.698 "data_offset": 2048, 00:13:34.698 "data_size": 63488 00:13:34.698 }, 00:13:34.698 { 00:13:34.698 "name": "BaseBdev4", 00:13:34.698 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:34.698 "is_configured": true, 00:13:34.698 "data_offset": 2048, 00:13:34.698 "data_size": 63488 00:13:34.698 } 00:13:34.698 ] 00:13:34.698 }' 00:13:34.698 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.698 17:32:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.957 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.957 17:32:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:35.524 17:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:35.524 17:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.524 17:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:35.524 17:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 33406547-42d0-11ef-96ac-773515fba644 00:13:35.782 [2024-07-15 17:32:31.524633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:35.782 [2024-07-15 17:32:31.524682] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x17df59834f00 00:13:35.782 [2024-07-15 17:32:31.524687] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:35.782 [2024-07-15 17:32:31.524717] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x17df59897e20 00:13:35.782 [2024-07-15 17:32:31.524765] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17df59834f00 00:13:35.782 [2024-07-15 17:32:31.524769] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x17df59834f00 00:13:35.782 [2024-07-15 17:32:31.524789] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.782 NewBaseBdev 00:13:35.782 17:32:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:35.782 17:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:35.782 17:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:35.782 17:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:35.782 17:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:35.782 17:32:31 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:35.782 17:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:36.039 17:32:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:36.297 [ 00:13:36.297 { 00:13:36.297 "name": "NewBaseBdev", 00:13:36.297 "aliases": [ 00:13:36.297 "33406547-42d0-11ef-96ac-773515fba644" 00:13:36.297 ], 00:13:36.297 "product_name": "Malloc disk", 00:13:36.297 "block_size": 512, 00:13:36.298 "num_blocks": 65536, 00:13:36.298 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:36.298 "assigned_rate_limits": { 00:13:36.298 "rw_ios_per_sec": 0, 00:13:36.298 "rw_mbytes_per_sec": 0, 00:13:36.298 "r_mbytes_per_sec": 0, 00:13:36.298 "w_mbytes_per_sec": 0 00:13:36.298 }, 00:13:36.298 "claimed": true, 00:13:36.298 "claim_type": "exclusive_write", 00:13:36.298 "zoned": false, 00:13:36.298 "supported_io_types": { 00:13:36.298 "read": true, 00:13:36.298 "write": true, 00:13:36.298 "unmap": true, 00:13:36.298 "flush": true, 00:13:36.298 "reset": true, 00:13:36.298 "nvme_admin": false, 00:13:36.298 "nvme_io": false, 00:13:36.298 "nvme_io_md": false, 00:13:36.298 "write_zeroes": true, 00:13:36.298 "zcopy": true, 00:13:36.298 "get_zone_info": false, 00:13:36.298 "zone_management": false, 00:13:36.298 "zone_append": false, 00:13:36.298 "compare": false, 00:13:36.298 "compare_and_write": false, 00:13:36.298 "abort": true, 00:13:36.298 "seek_hole": false, 00:13:36.298 "seek_data": false, 00:13:36.298 "copy": true, 00:13:36.298 "nvme_iov_md": false 00:13:36.298 }, 00:13:36.298 "memory_domains": [ 00:13:36.298 { 00:13:36.298 "dma_device_id": "system", 00:13:36.298 "dma_device_type": 1 00:13:36.298 }, 00:13:36.298 { 00:13:36.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.298 "dma_device_type": 2 00:13:36.298 } 00:13:36.298 ], 00:13:36.298 "driver_specific": {} 00:13:36.298 } 00:13:36.298 ] 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.298 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.556 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:36.556 "name": "Existed_Raid", 00:13:36.556 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:36.556 "strip_size_kb": 64, 00:13:36.556 "state": "online", 00:13:36.556 "raid_level": "raid0", 00:13:36.557 "superblock": true, 00:13:36.557 "num_base_bdevs": 4, 00:13:36.557 "num_base_bdevs_discovered": 4, 00:13:36.557 "num_base_bdevs_operational": 4, 00:13:36.557 "base_bdevs_list": [ 00:13:36.557 { 00:13:36.557 "name": "NewBaseBdev", 00:13:36.557 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:36.557 "is_configured": true, 00:13:36.557 "data_offset": 2048, 00:13:36.557 "data_size": 63488 00:13:36.557 }, 00:13:36.557 { 00:13:36.557 "name": "BaseBdev2", 00:13:36.557 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:36.557 "is_configured": true, 00:13:36.557 "data_offset": 2048, 00:13:36.557 "data_size": 63488 00:13:36.557 }, 00:13:36.557 { 00:13:36.557 "name": "BaseBdev3", 00:13:36.557 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:36.557 "is_configured": true, 00:13:36.557 "data_offset": 2048, 00:13:36.557 "data_size": 63488 00:13:36.557 }, 00:13:36.557 { 00:13:36.557 "name": "BaseBdev4", 00:13:36.557 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:36.557 "is_configured": true, 00:13:36.557 "data_offset": 2048, 00:13:36.557 "data_size": 63488 00:13:36.557 } 00:13:36.557 ] 00:13:36.557 }' 00:13:36.557 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:36.557 17:32:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.815 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:36.815 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:36.815 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:36.815 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:36.815 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:36.815 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:36.815 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:36.815 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:37.382 [2024-07-15 17:32:32.908583] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.382 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:37.382 "name": "Existed_Raid", 00:13:37.382 "aliases": [ 00:13:37.382 "321a313a-42d0-11ef-96ac-773515fba644" 00:13:37.382 ], 00:13:37.382 "product_name": "Raid Volume", 00:13:37.382 "block_size": 512, 00:13:37.382 "num_blocks": 253952, 00:13:37.382 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:37.382 "assigned_rate_limits": { 00:13:37.382 "rw_ios_per_sec": 0, 00:13:37.382 "rw_mbytes_per_sec": 0, 00:13:37.382 "r_mbytes_per_sec": 0, 00:13:37.382 "w_mbytes_per_sec": 0 00:13:37.382 }, 
00:13:37.382 "claimed": false, 00:13:37.382 "zoned": false, 00:13:37.382 "supported_io_types": { 00:13:37.382 "read": true, 00:13:37.382 "write": true, 00:13:37.382 "unmap": true, 00:13:37.382 "flush": true, 00:13:37.382 "reset": true, 00:13:37.382 "nvme_admin": false, 00:13:37.382 "nvme_io": false, 00:13:37.382 "nvme_io_md": false, 00:13:37.382 "write_zeroes": true, 00:13:37.382 "zcopy": false, 00:13:37.382 "get_zone_info": false, 00:13:37.382 "zone_management": false, 00:13:37.382 "zone_append": false, 00:13:37.382 "compare": false, 00:13:37.382 "compare_and_write": false, 00:13:37.382 "abort": false, 00:13:37.382 "seek_hole": false, 00:13:37.382 "seek_data": false, 00:13:37.382 "copy": false, 00:13:37.382 "nvme_iov_md": false 00:13:37.382 }, 00:13:37.382 "memory_domains": [ 00:13:37.382 { 00:13:37.382 "dma_device_id": "system", 00:13:37.382 "dma_device_type": 1 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.382 "dma_device_type": 2 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "system", 00:13:37.382 "dma_device_type": 1 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.382 "dma_device_type": 2 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "system", 00:13:37.382 "dma_device_type": 1 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.382 "dma_device_type": 2 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "system", 00:13:37.382 "dma_device_type": 1 00:13:37.382 }, 00:13:37.382 { 00:13:37.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.382 "dma_device_type": 2 00:13:37.382 } 00:13:37.382 ], 00:13:37.382 "driver_specific": { 00:13:37.382 "raid": { 00:13:37.383 "uuid": "321a313a-42d0-11ef-96ac-773515fba644", 00:13:37.383 "strip_size_kb": 64, 00:13:37.383 "state": "online", 00:13:37.383 "raid_level": "raid0", 00:13:37.383 "superblock": true, 00:13:37.383 "num_base_bdevs": 4, 00:13:37.383 "num_base_bdevs_discovered": 4, 00:13:37.383 "num_base_bdevs_operational": 4, 00:13:37.383 "base_bdevs_list": [ 00:13:37.383 { 00:13:37.383 "name": "NewBaseBdev", 00:13:37.383 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:37.383 "is_configured": true, 00:13:37.383 "data_offset": 2048, 00:13:37.383 "data_size": 63488 00:13:37.383 }, 00:13:37.383 { 00:13:37.383 "name": "BaseBdev2", 00:13:37.383 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:37.383 "is_configured": true, 00:13:37.383 "data_offset": 2048, 00:13:37.383 "data_size": 63488 00:13:37.383 }, 00:13:37.383 { 00:13:37.383 "name": "BaseBdev3", 00:13:37.383 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:37.383 "is_configured": true, 00:13:37.383 "data_offset": 2048, 00:13:37.383 "data_size": 63488 00:13:37.383 }, 00:13:37.383 { 00:13:37.383 "name": "BaseBdev4", 00:13:37.383 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:37.383 "is_configured": true, 00:13:37.383 "data_offset": 2048, 00:13:37.383 "data_size": 63488 00:13:37.383 } 00:13:37.383 ] 00:13:37.383 } 00:13:37.383 } 00:13:37.383 }' 00:13:37.383 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.383 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:37.383 BaseBdev2 00:13:37.383 BaseBdev3 00:13:37.383 BaseBdev4' 00:13:37.383 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:13:37.383 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:37.383 17:32:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:37.641 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:37.642 "name": "NewBaseBdev", 00:13:37.642 "aliases": [ 00:13:37.642 "33406547-42d0-11ef-96ac-773515fba644" 00:13:37.642 ], 00:13:37.642 "product_name": "Malloc disk", 00:13:37.642 "block_size": 512, 00:13:37.642 "num_blocks": 65536, 00:13:37.642 "uuid": "33406547-42d0-11ef-96ac-773515fba644", 00:13:37.642 "assigned_rate_limits": { 00:13:37.642 "rw_ios_per_sec": 0, 00:13:37.642 "rw_mbytes_per_sec": 0, 00:13:37.642 "r_mbytes_per_sec": 0, 00:13:37.642 "w_mbytes_per_sec": 0 00:13:37.642 }, 00:13:37.642 "claimed": true, 00:13:37.642 "claim_type": "exclusive_write", 00:13:37.642 "zoned": false, 00:13:37.642 "supported_io_types": { 00:13:37.642 "read": true, 00:13:37.642 "write": true, 00:13:37.642 "unmap": true, 00:13:37.642 "flush": true, 00:13:37.642 "reset": true, 00:13:37.642 "nvme_admin": false, 00:13:37.642 "nvme_io": false, 00:13:37.642 "nvme_io_md": false, 00:13:37.642 "write_zeroes": true, 00:13:37.642 "zcopy": true, 00:13:37.642 "get_zone_info": false, 00:13:37.642 "zone_management": false, 00:13:37.642 "zone_append": false, 00:13:37.642 "compare": false, 00:13:37.642 "compare_and_write": false, 00:13:37.642 "abort": true, 00:13:37.642 "seek_hole": false, 00:13:37.642 "seek_data": false, 00:13:37.642 "copy": true, 00:13:37.642 "nvme_iov_md": false 00:13:37.642 }, 00:13:37.642 "memory_domains": [ 00:13:37.642 { 00:13:37.642 "dma_device_id": "system", 00:13:37.642 "dma_device_type": 1 00:13:37.642 }, 00:13:37.642 { 00:13:37.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.642 "dma_device_type": 2 00:13:37.642 } 00:13:37.642 ], 00:13:37.642 "driver_specific": {} 00:13:37.642 }' 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:37.642 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:37.900 "name": "BaseBdev2", 00:13:37.900 "aliases": [ 00:13:37.900 "30b8ca45-42d0-11ef-96ac-773515fba644" 00:13:37.900 ], 00:13:37.900 "product_name": "Malloc disk", 00:13:37.900 "block_size": 512, 00:13:37.900 "num_blocks": 65536, 00:13:37.900 "uuid": "30b8ca45-42d0-11ef-96ac-773515fba644", 00:13:37.900 "assigned_rate_limits": { 00:13:37.900 "rw_ios_per_sec": 0, 00:13:37.900 "rw_mbytes_per_sec": 0, 00:13:37.900 "r_mbytes_per_sec": 0, 00:13:37.900 "w_mbytes_per_sec": 0 00:13:37.900 }, 00:13:37.900 "claimed": true, 00:13:37.900 "claim_type": "exclusive_write", 00:13:37.900 "zoned": false, 00:13:37.900 "supported_io_types": { 00:13:37.900 "read": true, 00:13:37.900 "write": true, 00:13:37.900 "unmap": true, 00:13:37.900 "flush": true, 00:13:37.900 "reset": true, 00:13:37.900 "nvme_admin": false, 00:13:37.900 "nvme_io": false, 00:13:37.900 "nvme_io_md": false, 00:13:37.900 "write_zeroes": true, 00:13:37.900 "zcopy": true, 00:13:37.900 "get_zone_info": false, 00:13:37.900 "zone_management": false, 00:13:37.900 "zone_append": false, 00:13:37.900 "compare": false, 00:13:37.900 "compare_and_write": false, 00:13:37.900 "abort": true, 00:13:37.900 "seek_hole": false, 00:13:37.900 "seek_data": false, 00:13:37.900 "copy": true, 00:13:37.900 "nvme_iov_md": false 00:13:37.900 }, 00:13:37.900 "memory_domains": [ 00:13:37.900 { 00:13:37.900 "dma_device_id": "system", 00:13:37.900 "dma_device_type": 1 00:13:37.900 }, 00:13:37.900 { 00:13:37.900 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.900 "dma_device_type": 2 00:13:37.900 } 00:13:37.900 ], 00:13:37.900 "driver_specific": {} 00:13:37.900 }' 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:37.900 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:37.900 17:32:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:38.157 "name": "BaseBdev3", 00:13:38.157 "aliases": [ 00:13:38.157 "3134b06a-42d0-11ef-96ac-773515fba644" 00:13:38.157 ], 00:13:38.157 "product_name": "Malloc disk", 00:13:38.157 "block_size": 512, 00:13:38.157 "num_blocks": 65536, 00:13:38.157 "uuid": "3134b06a-42d0-11ef-96ac-773515fba644", 00:13:38.157 "assigned_rate_limits": { 00:13:38.157 "rw_ios_per_sec": 0, 00:13:38.157 "rw_mbytes_per_sec": 0, 00:13:38.157 "r_mbytes_per_sec": 0, 00:13:38.157 "w_mbytes_per_sec": 0 00:13:38.157 }, 00:13:38.157 "claimed": true, 00:13:38.157 "claim_type": "exclusive_write", 00:13:38.157 "zoned": false, 00:13:38.157 "supported_io_types": { 00:13:38.157 "read": true, 00:13:38.157 "write": true, 00:13:38.157 "unmap": true, 00:13:38.157 "flush": true, 00:13:38.157 "reset": true, 00:13:38.157 "nvme_admin": false, 00:13:38.157 "nvme_io": false, 00:13:38.157 "nvme_io_md": false, 00:13:38.157 "write_zeroes": true, 00:13:38.157 "zcopy": true, 00:13:38.157 "get_zone_info": false, 00:13:38.157 "zone_management": false, 00:13:38.157 "zone_append": false, 00:13:38.157 "compare": false, 00:13:38.157 "compare_and_write": false, 00:13:38.157 "abort": true, 00:13:38.157 "seek_hole": false, 00:13:38.157 "seek_data": false, 00:13:38.157 "copy": true, 00:13:38.157 "nvme_iov_md": false 00:13:38.157 }, 00:13:38.157 "memory_domains": [ 00:13:38.157 { 00:13:38.157 "dma_device_id": "system", 00:13:38.157 "dma_device_type": 1 00:13:38.157 }, 00:13:38.157 { 00:13:38.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.157 "dma_device_type": 2 00:13:38.157 } 00:13:38.157 ], 00:13:38.157 "driver_specific": {} 00:13:38.157 }' 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:38.157 17:32:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:38.417 17:32:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:38.417 "name": "BaseBdev4", 00:13:38.417 "aliases": [ 00:13:38.417 "31ab1b09-42d0-11ef-96ac-773515fba644" 00:13:38.417 ], 00:13:38.417 "product_name": "Malloc disk", 00:13:38.417 "block_size": 512, 00:13:38.417 "num_blocks": 65536, 00:13:38.417 "uuid": "31ab1b09-42d0-11ef-96ac-773515fba644", 00:13:38.417 "assigned_rate_limits": { 00:13:38.417 "rw_ios_per_sec": 0, 00:13:38.417 "rw_mbytes_per_sec": 0, 00:13:38.417 "r_mbytes_per_sec": 0, 00:13:38.417 "w_mbytes_per_sec": 0 00:13:38.417 }, 00:13:38.417 "claimed": true, 00:13:38.417 "claim_type": "exclusive_write", 00:13:38.417 "zoned": false, 00:13:38.417 "supported_io_types": { 00:13:38.417 "read": true, 00:13:38.417 "write": true, 00:13:38.417 "unmap": true, 00:13:38.417 "flush": true, 00:13:38.417 "reset": true, 00:13:38.417 "nvme_admin": false, 00:13:38.417 "nvme_io": false, 00:13:38.417 "nvme_io_md": false, 00:13:38.417 "write_zeroes": true, 00:13:38.417 "zcopy": true, 00:13:38.417 "get_zone_info": false, 00:13:38.417 "zone_management": false, 00:13:38.417 "zone_append": false, 00:13:38.417 "compare": false, 00:13:38.417 "compare_and_write": false, 00:13:38.417 "abort": true, 00:13:38.417 "seek_hole": false, 00:13:38.417 "seek_data": false, 00:13:38.417 "copy": true, 00:13:38.417 "nvme_iov_md": false 00:13:38.417 }, 00:13:38.417 "memory_domains": [ 00:13:38.417 { 00:13:38.417 "dma_device_id": "system", 00:13:38.417 "dma_device_type": 1 00:13:38.417 }, 00:13:38.417 { 00:13:38.417 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.417 "dma_device_type": 2 00:13:38.417 } 00:13:38.417 ], 00:13:38.417 "driver_specific": {} 00:13:38.417 }' 00:13:38.417 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:38.676 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:38.935 [2024-07-15 17:32:34.548553] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:38.935 [2024-07-15 17:32:34.548577] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:38.935 [2024-07-15 17:32:34.548600] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:38.935 [2024-07-15 17:32:34.548615] bdev_raid.c: 
451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:38.935 [2024-07-15 17:32:34.548619] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17df59834f00 name Existed_Raid, state offline 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 59186 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 59186 ']' 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 59186 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 59186 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:38.935 killing process with pid 59186 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59186' 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 59186 00:13:38.935 [2024-07-15 17:32:34.574830] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:38.935 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 59186 00:13:38.935 [2024-07-15 17:32:34.597884] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.194 17:32:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:39.194 00:13:39.194 real 0m27.468s 00:13:39.194 user 0m50.555s 00:13:39.194 sys 0m3.505s 00:13:39.194 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.194 ************************************ 00:13:39.194 END TEST raid_state_function_test_sb 00:13:39.194 ************************************ 00:13:39.194 17:32:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.194 17:32:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:39.194 17:32:34 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:39.194 17:32:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:39.194 17:32:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.194 17:32:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.194 ************************************ 00:13:39.194 START TEST raid_superblock_test 00:13:39.194 ************************************ 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=60004 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 60004 /var/tmp/spdk-raid.sock 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 60004 ']' 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.194 17:32:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.194 [2024-07-15 17:32:34.829133] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:13:39.194 [2024-07-15 17:32:34.829272] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:39.762 EAL: TSC is not safe to use in SMP mode 00:13:39.762 EAL: TSC is not invariant 00:13:39.762 [2024-07-15 17:32:35.347372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.762 [2024-07-15 17:32:35.428434] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
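[Editor's note] For reference, the RPC sequence that this raid_superblock_test run drives against the bdev_svc app (started above with -r /var/tmp/spdk-raid.sock -L bdev_raid) can be sketched as below. This is a minimal reconstruction assembled only from the commands visible in this trace; the loop form, the RPC shorthand and the comments are illustrative and are not the literal bdev_raid.sh source.

    # Shorthand for the JSON-RPC client used throughout this trace (assumed helper;
    # the trace itself always spells out the full path and socket).
    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    # Four 32 MB malloc bdevs with 512-byte blocks (65536 blocks each), each wrapped
    # in a passthru bdev pt1..pt4 with a fixed UUID, as seen in the trace below.
    for i in 1 2 3 4; do
        $RPC bdev_malloc_create 32 512 -b "malloc$i"
        $RPC bdev_passthru_create -b "malloc$i" -p "pt$i" \
             -u "00000000-0000-0000-0000-00000000000$i"
    done

    # Assemble a raid0 volume with a 64 KiB strip size and an on-disk superblock (-s).
    # The superblock reserves 2048 blocks per base bdev, which is why the dumps below
    # report data_offset 2048, data_size 63488 and 4 * 63488 = 253952 raid blocks.
    $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

    # Verify the assembled volume and its base bdevs.
    $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
    $RPC bdev_get_bdevs -b raid_bdev1 | jq '.[]'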
00:13:39.762 [2024-07-15 17:32:35.430521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.762 [2024-07-15 17:32:35.431283] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:39.762 [2024-07-15 17:32:35.431296] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:40.020 17:32:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:40.277 malloc1 00:13:40.534 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:40.791 [2024-07-15 17:32:36.383021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:40.791 [2024-07-15 17:32:36.383095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:40.791 [2024-07-15 17:32:36.383108] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0034780 00:13:40.791 [2024-07-15 17:32:36.383117] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:40.791 [2024-07-15 17:32:36.384028] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:40.791 [2024-07-15 17:32:36.384066] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:40.791 pt1 00:13:40.791 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:40.791 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:40.791 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:40.791 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:40.791 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:40.791 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:40.791 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:40.791 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:40.791 17:32:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:41.048 malloc2 00:13:41.048 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.306 [2024-07-15 17:32:36.947029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.306 [2024-07-15 17:32:36.947097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.306 [2024-07-15 17:32:36.947111] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0034c80 00:13:41.306 [2024-07-15 17:32:36.947119] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.306 [2024-07-15 17:32:36.947773] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.306 [2024-07-15 17:32:36.947801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.306 pt2 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.306 17:32:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:41.563 malloc3 00:13:41.563 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:41.821 [2024-07-15 17:32:37.495037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:41.821 [2024-07-15 17:32:37.495094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.821 [2024-07-15 17:32:37.495107] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0035180 00:13:41.821 [2024-07-15 17:32:37.495115] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.821 [2024-07-15 17:32:37.495773] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.821 [2024-07-15 17:32:37.495799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:41.821 pt3 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc4 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.821 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:13:42.078 malloc4 00:13:42.078 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:42.335 [2024-07-15 17:32:37.975043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:42.335 [2024-07-15 17:32:37.975103] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.335 [2024-07-15 17:32:37.975116] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0035680 00:13:42.335 [2024-07-15 17:32:37.975125] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.335 [2024-07-15 17:32:37.975778] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.335 [2024-07-15 17:32:37.975805] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:42.335 pt4 00:13:42.335 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:42.335 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:42.335 17:32:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:13:42.593 [2024-07-15 17:32:38.263066] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:42.593 [2024-07-15 17:32:38.263653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:42.593 [2024-07-15 17:32:38.263677] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:42.593 [2024-07-15 17:32:38.263689] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:42.593 [2024-07-15 17:32:38.263744] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3249d0035900 00:13:42.593 [2024-07-15 17:32:38.263751] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:42.593 [2024-07-15 17:32:38.263784] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3249d0097e20 00:13:42.593 [2024-07-15 17:32:38.263862] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3249d0035900 00:13:42.593 [2024-07-15 17:32:38.263867] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3249d0035900 00:13:42.593 [2024-07-15 17:32:38.263894] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.593 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.850 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:42.850 "name": "raid_bdev1", 00:13:42.850 "uuid": "3b05a282-42d0-11ef-96ac-773515fba644", 00:13:42.850 "strip_size_kb": 64, 00:13:42.850 "state": "online", 00:13:42.850 "raid_level": "raid0", 00:13:42.850 "superblock": true, 00:13:42.850 "num_base_bdevs": 4, 00:13:42.850 "num_base_bdevs_discovered": 4, 00:13:42.850 "num_base_bdevs_operational": 4, 00:13:42.850 "base_bdevs_list": [ 00:13:42.850 { 00:13:42.850 "name": "pt1", 00:13:42.850 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:42.850 "is_configured": true, 00:13:42.850 "data_offset": 2048, 00:13:42.850 "data_size": 63488 00:13:42.850 }, 00:13:42.850 { 00:13:42.850 "name": "pt2", 00:13:42.850 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:42.850 "is_configured": true, 00:13:42.850 "data_offset": 2048, 00:13:42.850 "data_size": 63488 00:13:42.850 }, 00:13:42.850 { 00:13:42.850 "name": "pt3", 00:13:42.851 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:42.851 "is_configured": true, 00:13:42.851 "data_offset": 2048, 00:13:42.851 "data_size": 63488 00:13:42.851 }, 00:13:42.851 { 00:13:42.851 "name": "pt4", 00:13:42.851 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:42.851 "is_configured": true, 00:13:42.851 "data_offset": 2048, 00:13:42.851 "data_size": 63488 00:13:42.851 } 00:13:42.851 ] 00:13:42.851 }' 00:13:42.851 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:42.851 17:32:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.108 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:43.108 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:43.108 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:43.108 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:43.108 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:43.108 17:32:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@198 -- # local name 00:13:43.108 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:43.108 17:32:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:43.366 [2024-07-15 17:32:39.111117] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.366 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:43.366 "name": "raid_bdev1", 00:13:43.366 "aliases": [ 00:13:43.366 "3b05a282-42d0-11ef-96ac-773515fba644" 00:13:43.366 ], 00:13:43.366 "product_name": "Raid Volume", 00:13:43.366 "block_size": 512, 00:13:43.366 "num_blocks": 253952, 00:13:43.366 "uuid": "3b05a282-42d0-11ef-96ac-773515fba644", 00:13:43.366 "assigned_rate_limits": { 00:13:43.366 "rw_ios_per_sec": 0, 00:13:43.366 "rw_mbytes_per_sec": 0, 00:13:43.366 "r_mbytes_per_sec": 0, 00:13:43.366 "w_mbytes_per_sec": 0 00:13:43.366 }, 00:13:43.366 "claimed": false, 00:13:43.366 "zoned": false, 00:13:43.366 "supported_io_types": { 00:13:43.366 "read": true, 00:13:43.366 "write": true, 00:13:43.366 "unmap": true, 00:13:43.366 "flush": true, 00:13:43.366 "reset": true, 00:13:43.366 "nvme_admin": false, 00:13:43.366 "nvme_io": false, 00:13:43.366 "nvme_io_md": false, 00:13:43.366 "write_zeroes": true, 00:13:43.366 "zcopy": false, 00:13:43.366 "get_zone_info": false, 00:13:43.366 "zone_management": false, 00:13:43.366 "zone_append": false, 00:13:43.366 "compare": false, 00:13:43.366 "compare_and_write": false, 00:13:43.366 "abort": false, 00:13:43.366 "seek_hole": false, 00:13:43.366 "seek_data": false, 00:13:43.366 "copy": false, 00:13:43.366 "nvme_iov_md": false 00:13:43.366 }, 00:13:43.366 "memory_domains": [ 00:13:43.366 { 00:13:43.366 "dma_device_id": "system", 00:13:43.366 "dma_device_type": 1 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.366 "dma_device_type": 2 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "dma_device_id": "system", 00:13:43.366 "dma_device_type": 1 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.366 "dma_device_type": 2 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "dma_device_id": "system", 00:13:43.366 "dma_device_type": 1 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.366 "dma_device_type": 2 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "dma_device_id": "system", 00:13:43.366 "dma_device_type": 1 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.366 "dma_device_type": 2 00:13:43.366 } 00:13:43.366 ], 00:13:43.366 "driver_specific": { 00:13:43.366 "raid": { 00:13:43.366 "uuid": "3b05a282-42d0-11ef-96ac-773515fba644", 00:13:43.366 "strip_size_kb": 64, 00:13:43.366 "state": "online", 00:13:43.366 "raid_level": "raid0", 00:13:43.366 "superblock": true, 00:13:43.366 "num_base_bdevs": 4, 00:13:43.366 "num_base_bdevs_discovered": 4, 00:13:43.366 "num_base_bdevs_operational": 4, 00:13:43.366 "base_bdevs_list": [ 00:13:43.366 { 00:13:43.366 "name": "pt1", 00:13:43.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.366 "is_configured": true, 00:13:43.366 "data_offset": 2048, 00:13:43.366 "data_size": 63488 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "name": "pt2", 00:13:43.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.366 "is_configured": true, 00:13:43.366 "data_offset": 2048, 
00:13:43.366 "data_size": 63488 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "name": "pt3", 00:13:43.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.366 "is_configured": true, 00:13:43.366 "data_offset": 2048, 00:13:43.366 "data_size": 63488 00:13:43.366 }, 00:13:43.366 { 00:13:43.366 "name": "pt4", 00:13:43.366 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.367 "is_configured": true, 00:13:43.367 "data_offset": 2048, 00:13:43.367 "data_size": 63488 00:13:43.367 } 00:13:43.367 ] 00:13:43.367 } 00:13:43.367 } 00:13:43.367 }' 00:13:43.367 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.367 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:43.367 pt2 00:13:43.367 pt3 00:13:43.367 pt4' 00:13:43.367 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.367 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:43.367 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:43.624 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:43.624 "name": "pt1", 00:13:43.624 "aliases": [ 00:13:43.625 "00000000-0000-0000-0000-000000000001" 00:13:43.625 ], 00:13:43.625 "product_name": "passthru", 00:13:43.625 "block_size": 512, 00:13:43.625 "num_blocks": 65536, 00:13:43.625 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.625 "assigned_rate_limits": { 00:13:43.625 "rw_ios_per_sec": 0, 00:13:43.625 "rw_mbytes_per_sec": 0, 00:13:43.625 "r_mbytes_per_sec": 0, 00:13:43.625 "w_mbytes_per_sec": 0 00:13:43.625 }, 00:13:43.625 "claimed": true, 00:13:43.625 "claim_type": "exclusive_write", 00:13:43.625 "zoned": false, 00:13:43.625 "supported_io_types": { 00:13:43.625 "read": true, 00:13:43.625 "write": true, 00:13:43.625 "unmap": true, 00:13:43.625 "flush": true, 00:13:43.625 "reset": true, 00:13:43.625 "nvme_admin": false, 00:13:43.625 "nvme_io": false, 00:13:43.625 "nvme_io_md": false, 00:13:43.625 "write_zeroes": true, 00:13:43.625 "zcopy": true, 00:13:43.625 "get_zone_info": false, 00:13:43.625 "zone_management": false, 00:13:43.625 "zone_append": false, 00:13:43.625 "compare": false, 00:13:43.625 "compare_and_write": false, 00:13:43.625 "abort": true, 00:13:43.625 "seek_hole": false, 00:13:43.625 "seek_data": false, 00:13:43.625 "copy": true, 00:13:43.625 "nvme_iov_md": false 00:13:43.625 }, 00:13:43.625 "memory_domains": [ 00:13:43.625 { 00:13:43.625 "dma_device_id": "system", 00:13:43.625 "dma_device_type": 1 00:13:43.625 }, 00:13:43.625 { 00:13:43.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.625 "dma_device_type": 2 00:13:43.625 } 00:13:43.625 ], 00:13:43.625 "driver_specific": { 00:13:43.625 "passthru": { 00:13:43.625 "name": "pt1", 00:13:43.625 "base_bdev_name": "malloc1" 00:13:43.625 } 00:13:43.625 } 00:13:43.625 }' 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:43.625 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:43.882 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.882 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:43.882 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:43.883 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.883 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:43.883 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.140 "name": "pt2", 00:13:44.140 "aliases": [ 00:13:44.140 "00000000-0000-0000-0000-000000000002" 00:13:44.140 ], 00:13:44.140 "product_name": "passthru", 00:13:44.140 "block_size": 512, 00:13:44.140 "num_blocks": 65536, 00:13:44.140 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.140 "assigned_rate_limits": { 00:13:44.140 "rw_ios_per_sec": 0, 00:13:44.140 "rw_mbytes_per_sec": 0, 00:13:44.140 "r_mbytes_per_sec": 0, 00:13:44.140 "w_mbytes_per_sec": 0 00:13:44.140 }, 00:13:44.140 "claimed": true, 00:13:44.140 "claim_type": "exclusive_write", 00:13:44.140 "zoned": false, 00:13:44.140 "supported_io_types": { 00:13:44.140 "read": true, 00:13:44.140 "write": true, 00:13:44.140 "unmap": true, 00:13:44.140 "flush": true, 00:13:44.140 "reset": true, 00:13:44.140 "nvme_admin": false, 00:13:44.140 "nvme_io": false, 00:13:44.140 "nvme_io_md": false, 00:13:44.140 "write_zeroes": true, 00:13:44.140 "zcopy": true, 00:13:44.140 "get_zone_info": false, 00:13:44.140 "zone_management": false, 00:13:44.140 "zone_append": false, 00:13:44.140 "compare": false, 00:13:44.140 "compare_and_write": false, 00:13:44.140 "abort": true, 00:13:44.140 "seek_hole": false, 00:13:44.140 "seek_data": false, 00:13:44.140 "copy": true, 00:13:44.140 "nvme_iov_md": false 00:13:44.140 }, 00:13:44.140 "memory_domains": [ 00:13:44.140 { 00:13:44.140 "dma_device_id": "system", 00:13:44.140 "dma_device_type": 1 00:13:44.140 }, 00:13:44.140 { 00:13:44.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.140 "dma_device_type": 2 00:13:44.140 } 00:13:44.140 ], 00:13:44.140 "driver_specific": { 00:13:44.140 "passthru": { 00:13:44.140 "name": "pt2", 00:13:44.140 "base_bdev_name": "malloc2" 00:13:44.140 } 00:13:44.140 } 00:13:44.140 }' 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.140 
17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:44.140 17:32:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.398 "name": "pt3", 00:13:44.398 "aliases": [ 00:13:44.398 "00000000-0000-0000-0000-000000000003" 00:13:44.398 ], 00:13:44.398 "product_name": "passthru", 00:13:44.398 "block_size": 512, 00:13:44.398 "num_blocks": 65536, 00:13:44.398 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.398 "assigned_rate_limits": { 00:13:44.398 "rw_ios_per_sec": 0, 00:13:44.398 "rw_mbytes_per_sec": 0, 00:13:44.398 "r_mbytes_per_sec": 0, 00:13:44.398 "w_mbytes_per_sec": 0 00:13:44.398 }, 00:13:44.398 "claimed": true, 00:13:44.398 "claim_type": "exclusive_write", 00:13:44.398 "zoned": false, 00:13:44.398 "supported_io_types": { 00:13:44.398 "read": true, 00:13:44.398 "write": true, 00:13:44.398 "unmap": true, 00:13:44.398 "flush": true, 00:13:44.398 "reset": true, 00:13:44.398 "nvme_admin": false, 00:13:44.398 "nvme_io": false, 00:13:44.398 "nvme_io_md": false, 00:13:44.398 "write_zeroes": true, 00:13:44.398 "zcopy": true, 00:13:44.398 "get_zone_info": false, 00:13:44.398 "zone_management": false, 00:13:44.398 "zone_append": false, 00:13:44.398 "compare": false, 00:13:44.398 "compare_and_write": false, 00:13:44.398 "abort": true, 00:13:44.398 "seek_hole": false, 00:13:44.398 "seek_data": false, 00:13:44.398 "copy": true, 00:13:44.398 "nvme_iov_md": false 00:13:44.398 }, 00:13:44.398 "memory_domains": [ 00:13:44.398 { 00:13:44.398 "dma_device_id": "system", 00:13:44.398 "dma_device_type": 1 00:13:44.398 }, 00:13:44.398 { 00:13:44.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.398 "dma_device_type": 2 00:13:44.398 } 00:13:44.398 ], 00:13:44.398 "driver_specific": { 00:13:44.398 "passthru": { 00:13:44.398 "name": "pt3", 00:13:44.398 "base_bdev_name": "malloc3" 00:13:44.398 } 00:13:44.398 } 00:13:44.398 }' 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:44.398 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.656 "name": "pt4", 00:13:44.656 "aliases": [ 00:13:44.656 "00000000-0000-0000-0000-000000000004" 00:13:44.656 ], 00:13:44.656 "product_name": "passthru", 00:13:44.656 "block_size": 512, 00:13:44.656 "num_blocks": 65536, 00:13:44.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.656 "assigned_rate_limits": { 00:13:44.656 "rw_ios_per_sec": 0, 00:13:44.656 "rw_mbytes_per_sec": 0, 00:13:44.656 "r_mbytes_per_sec": 0, 00:13:44.656 "w_mbytes_per_sec": 0 00:13:44.656 }, 00:13:44.656 "claimed": true, 00:13:44.656 "claim_type": "exclusive_write", 00:13:44.656 "zoned": false, 00:13:44.656 "supported_io_types": { 00:13:44.656 "read": true, 00:13:44.656 "write": true, 00:13:44.656 "unmap": true, 00:13:44.656 "flush": true, 00:13:44.656 "reset": true, 00:13:44.656 "nvme_admin": false, 00:13:44.656 "nvme_io": false, 00:13:44.656 "nvme_io_md": false, 00:13:44.656 "write_zeroes": true, 00:13:44.656 "zcopy": true, 00:13:44.656 "get_zone_info": false, 00:13:44.656 "zone_management": false, 00:13:44.656 "zone_append": false, 00:13:44.656 "compare": false, 00:13:44.656 "compare_and_write": false, 00:13:44.656 "abort": true, 00:13:44.656 "seek_hole": false, 00:13:44.656 "seek_data": false, 00:13:44.656 "copy": true, 00:13:44.656 "nvme_iov_md": false 00:13:44.656 }, 00:13:44.656 "memory_domains": [ 00:13:44.656 { 00:13:44.656 "dma_device_id": "system", 00:13:44.656 "dma_device_type": 1 00:13:44.656 }, 00:13:44.656 { 00:13:44.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.656 "dma_device_type": 2 00:13:44.656 } 00:13:44.656 ], 00:13:44.656 "driver_specific": { 00:13:44.656 "passthru": { 00:13:44.656 "name": "pt4", 00:13:44.656 "base_bdev_name": "malloc4" 00:13:44.656 } 00:13:44.656 } 00:13:44.656 }' 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.656 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.914 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
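The trace above (bdev_raid.sh lines 203-208) walks each passthru base bdev, fetches it over the test's RPC UNIX socket, and asserts its geometry with jq. A condensed, hand-written sketch of that loop is shown below, assuming the same rpc.py script and /var/tmp/spdk-raid.sock socket used in this run; it is not the verbatim bdev_raid.sh source.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for name in pt1 pt2 pt3 pt4; do
        # Fetch the single bdev object for this passthru device.
        info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        # The superblock test expects 512-byte blocks and no metadata/DIF.
        [[ $(jq .block_size <<< "$info") == 512 ]]
        [[ $(jq .md_size <<< "$info") == null ]]
        [[ $(jq .md_interleave <<< "$info") == null ]]
        [[ $(jq .dif_type <<< "$info") == null ]]
    done
Each check appears in the trace as a pair of jq invocations followed by a [[ ... == ... ]] comparison, which is why every property shows up twice per bdev in the log.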
00:13:44.914 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.914 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.914 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.914 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:44.914 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:45.172 [2024-07-15 17:32:40.763139] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.172 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=3b05a282-42d0-11ef-96ac-773515fba644 00:13:45.172 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 3b05a282-42d0-11ef-96ac-773515fba644 ']' 00:13:45.172 17:32:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:45.431 [2024-07-15 17:32:41.051109] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.431 [2024-07-15 17:32:41.051146] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.431 [2024-07-15 17:32:41.051178] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.431 [2024-07-15 17:32:41.051199] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.431 [2024-07-15 17:32:41.051206] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3249d0035900 name raid_bdev1, state offline 00:13:45.431 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.431 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:45.689 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:45.689 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:45.689 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:45.689 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:45.947 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:45.947 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:46.205 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.205 17:32:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:46.463 17:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.463 17:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:46.721 17:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:46.721 17:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:46.978 17:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:46.978 17:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:46.978 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:46.978 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:46.979 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:47.236 [2024-07-15 17:32:42.895134] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:47.236 [2024-07-15 17:32:42.895723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:47.236 [2024-07-15 17:32:42.895743] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:47.236 [2024-07-15 17:32:42.895752] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:47.236 [2024-07-15 17:32:42.895766] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:47.237 [2024-07-15 17:32:42.895803] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:47.237 [2024-07-15 17:32:42.895815] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:47.237 [2024-07-15 17:32:42.895825] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:47.237 [2024-07-15 17:32:42.895833] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.237 
[2024-07-15 17:32:42.895837] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3249d0035680 name raid_bdev1, state configuring 00:13:47.237 request: 00:13:47.237 { 00:13:47.237 "name": "raid_bdev1", 00:13:47.237 "raid_level": "raid0", 00:13:47.237 "base_bdevs": [ 00:13:47.237 "malloc1", 00:13:47.237 "malloc2", 00:13:47.237 "malloc3", 00:13:47.237 "malloc4" 00:13:47.237 ], 00:13:47.237 "strip_size_kb": 64, 00:13:47.237 "superblock": false, 00:13:47.237 "method": "bdev_raid_create", 00:13:47.237 "req_id": 1 00:13:47.237 } 00:13:47.237 Got JSON-RPC error response 00:13:47.237 response: 00:13:47.237 { 00:13:47.237 "code": -17, 00:13:47.237 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:47.237 } 00:13:47.237 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:47.237 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:47.237 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:47.237 17:32:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:47.237 17:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:47.237 17:32:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.494 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:47.494 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:47.494 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:47.751 [2024-07-15 17:32:43.411134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:47.752 [2024-07-15 17:32:43.411187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.752 [2024-07-15 17:32:43.411199] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0035180 00:13:47.752 [2024-07-15 17:32:43.411207] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.752 [2024-07-15 17:32:43.411859] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.752 [2024-07-15 17:32:43.411884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:47.752 [2024-07-15 17:32:43.411910] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:47.752 [2024-07-15 17:32:43.411921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:47.752 pt1 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:47.752 17:32:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.752 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.010 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.010 "name": "raid_bdev1", 00:13:48.010 "uuid": "3b05a282-42d0-11ef-96ac-773515fba644", 00:13:48.010 "strip_size_kb": 64, 00:13:48.010 "state": "configuring", 00:13:48.010 "raid_level": "raid0", 00:13:48.010 "superblock": true, 00:13:48.010 "num_base_bdevs": 4, 00:13:48.010 "num_base_bdevs_discovered": 1, 00:13:48.010 "num_base_bdevs_operational": 4, 00:13:48.010 "base_bdevs_list": [ 00:13:48.010 { 00:13:48.010 "name": "pt1", 00:13:48.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:48.010 "is_configured": true, 00:13:48.010 "data_offset": 2048, 00:13:48.010 "data_size": 63488 00:13:48.010 }, 00:13:48.010 { 00:13:48.010 "name": null, 00:13:48.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:48.010 "is_configured": false, 00:13:48.010 "data_offset": 2048, 00:13:48.010 "data_size": 63488 00:13:48.010 }, 00:13:48.010 { 00:13:48.010 "name": null, 00:13:48.010 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:48.010 "is_configured": false, 00:13:48.010 "data_offset": 2048, 00:13:48.010 "data_size": 63488 00:13:48.010 }, 00:13:48.010 { 00:13:48.010 "name": null, 00:13:48.010 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:48.010 "is_configured": false, 00:13:48.010 "data_offset": 2048, 00:13:48.010 "data_size": 63488 00:13:48.010 } 00:13:48.010 ] 00:13:48.010 }' 00:13:48.010 17:32:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.010 17:32:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.267 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:13:48.267 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:48.526 [2024-07-15 17:32:44.319149] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:48.526 [2024-07-15 17:32:44.319201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.526 [2024-07-15 17:32:44.319212] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0034780 00:13:48.526 [2024-07-15 17:32:44.319220] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.526 [2024-07-15 17:32:44.319336] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.526 [2024-07-15 17:32:44.319347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:48.526 [2024-07-15 17:32:44.319370] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:48.526 [2024-07-15 17:32:44.319380] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:48.526 pt2 00:13:48.526 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:48.785 [2024-07-15 17:32:44.559155] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.785 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.043 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:49.043 "name": "raid_bdev1", 00:13:49.043 "uuid": "3b05a282-42d0-11ef-96ac-773515fba644", 00:13:49.043 "strip_size_kb": 64, 00:13:49.044 "state": "configuring", 00:13:49.044 "raid_level": "raid0", 00:13:49.044 "superblock": true, 00:13:49.044 "num_base_bdevs": 4, 00:13:49.044 "num_base_bdevs_discovered": 1, 00:13:49.044 "num_base_bdevs_operational": 4, 00:13:49.044 "base_bdevs_list": [ 00:13:49.044 { 00:13:49.044 "name": "pt1", 00:13:49.044 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:49.044 "is_configured": true, 00:13:49.044 "data_offset": 2048, 00:13:49.044 "data_size": 63488 00:13:49.044 }, 00:13:49.044 { 00:13:49.044 "name": null, 00:13:49.044 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:49.044 "is_configured": false, 00:13:49.044 "data_offset": 2048, 00:13:49.044 "data_size": 63488 00:13:49.044 }, 00:13:49.044 { 00:13:49.044 "name": null, 00:13:49.044 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:49.044 "is_configured": false, 00:13:49.044 "data_offset": 2048, 00:13:49.044 "data_size": 63488 00:13:49.044 }, 00:13:49.044 { 00:13:49.044 "name": null, 00:13:49.044 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:49.044 "is_configured": false, 00:13:49.044 "data_offset": 2048, 00:13:49.044 "data_size": 63488 00:13:49.044 } 00:13:49.044 ] 00:13:49.044 }' 00:13:49.044 17:32:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:49.044 17:32:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.610 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:49.610 17:32:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:49.610 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:49.610 [2024-07-15 17:32:45.367167] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:49.610 [2024-07-15 17:32:45.367219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.610 [2024-07-15 17:32:45.367231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0034780 00:13:49.610 [2024-07-15 17:32:45.367239] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.610 [2024-07-15 17:32:45.367356] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.610 [2024-07-15 17:32:45.367367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:49.610 [2024-07-15 17:32:45.367391] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:49.610 [2024-07-15 17:32:45.367400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:49.610 pt2 00:13:49.610 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:49.610 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:49.610 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:49.868 [2024-07-15 17:32:45.671170] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:49.868 [2024-07-15 17:32:45.671220] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.868 [2024-07-15 17:32:45.671233] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0035b80 00:13:49.868 [2024-07-15 17:32:45.671241] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.868 [2024-07-15 17:32:45.671355] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.868 [2024-07-15 17:32:45.671365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:49.868 [2024-07-15 17:32:45.671387] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:49.868 [2024-07-15 17:32:45.671396] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:49.868 pt3 00:13:49.868 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:49.868 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:49.868 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:50.125 [2024-07-15 17:32:45.939173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:50.125 [2024-07-15 17:32:45.939219] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.125 [2024-07-15 17:32:45.939231] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3249d0035900 00:13:50.125 [2024-07-15 17:32:45.939239] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.125 [2024-07-15 17:32:45.939347] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.125 [2024-07-15 17:32:45.939358] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:50.125 [2024-07-15 17:32:45.939380] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:50.125 [2024-07-15 17:32:45.939388] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:50.125 [2024-07-15 17:32:45.939427] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3249d0034c80 00:13:50.125 [2024-07-15 17:32:45.939432] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:50.125 [2024-07-15 17:32:45.939453] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3249d0097e20 00:13:50.125 [2024-07-15 17:32:45.939506] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3249d0034c80 00:13:50.125 [2024-07-15 17:32:45.939511] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3249d0034c80 00:13:50.125 [2024-07-15 17:32:45.939540] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.125 pt4 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:50.125 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:50.382 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.383 17:32:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.641 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.641 "name": "raid_bdev1", 00:13:50.641 "uuid": "3b05a282-42d0-11ef-96ac-773515fba644", 00:13:50.641 "strip_size_kb": 64, 00:13:50.641 "state": "online", 00:13:50.641 "raid_level": "raid0", 00:13:50.641 "superblock": true, 00:13:50.641 "num_base_bdevs": 4, 00:13:50.641 "num_base_bdevs_discovered": 4, 00:13:50.641 "num_base_bdevs_operational": 4, 00:13:50.641 "base_bdevs_list": [ 00:13:50.641 { 00:13:50.641 "name": "pt1", 00:13:50.641 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:50.641 
"is_configured": true, 00:13:50.641 "data_offset": 2048, 00:13:50.641 "data_size": 63488 00:13:50.641 }, 00:13:50.641 { 00:13:50.641 "name": "pt2", 00:13:50.641 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.641 "is_configured": true, 00:13:50.641 "data_offset": 2048, 00:13:50.641 "data_size": 63488 00:13:50.641 }, 00:13:50.641 { 00:13:50.641 "name": "pt3", 00:13:50.641 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:50.641 "is_configured": true, 00:13:50.641 "data_offset": 2048, 00:13:50.641 "data_size": 63488 00:13:50.641 }, 00:13:50.641 { 00:13:50.641 "name": "pt4", 00:13:50.641 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:50.641 "is_configured": true, 00:13:50.641 "data_offset": 2048, 00:13:50.641 "data_size": 63488 00:13:50.641 } 00:13:50.641 ] 00:13:50.641 }' 00:13:50.641 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.641 17:32:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.899 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:50.899 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:50.899 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:50.899 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:50.899 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:50.899 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:50.899 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:50.899 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:51.236 [2024-07-15 17:32:46.751232] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:51.236 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:51.236 "name": "raid_bdev1", 00:13:51.236 "aliases": [ 00:13:51.236 "3b05a282-42d0-11ef-96ac-773515fba644" 00:13:51.236 ], 00:13:51.236 "product_name": "Raid Volume", 00:13:51.236 "block_size": 512, 00:13:51.236 "num_blocks": 253952, 00:13:51.236 "uuid": "3b05a282-42d0-11ef-96ac-773515fba644", 00:13:51.236 "assigned_rate_limits": { 00:13:51.236 "rw_ios_per_sec": 0, 00:13:51.236 "rw_mbytes_per_sec": 0, 00:13:51.236 "r_mbytes_per_sec": 0, 00:13:51.236 "w_mbytes_per_sec": 0 00:13:51.236 }, 00:13:51.236 "claimed": false, 00:13:51.236 "zoned": false, 00:13:51.236 "supported_io_types": { 00:13:51.236 "read": true, 00:13:51.236 "write": true, 00:13:51.236 "unmap": true, 00:13:51.236 "flush": true, 00:13:51.236 "reset": true, 00:13:51.236 "nvme_admin": false, 00:13:51.236 "nvme_io": false, 00:13:51.236 "nvme_io_md": false, 00:13:51.236 "write_zeroes": true, 00:13:51.236 "zcopy": false, 00:13:51.236 "get_zone_info": false, 00:13:51.236 "zone_management": false, 00:13:51.236 "zone_append": false, 00:13:51.236 "compare": false, 00:13:51.236 "compare_and_write": false, 00:13:51.236 "abort": false, 00:13:51.236 "seek_hole": false, 00:13:51.236 "seek_data": false, 00:13:51.236 "copy": false, 00:13:51.237 "nvme_iov_md": false 00:13:51.237 }, 00:13:51.237 "memory_domains": [ 00:13:51.237 { 00:13:51.237 "dma_device_id": "system", 00:13:51.237 "dma_device_type": 1 00:13:51.237 }, 00:13:51.237 { 
00:13:51.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.237 "dma_device_type": 2 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "dma_device_id": "system", 00:13:51.237 "dma_device_type": 1 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.237 "dma_device_type": 2 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "dma_device_id": "system", 00:13:51.237 "dma_device_type": 1 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.237 "dma_device_type": 2 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "dma_device_id": "system", 00:13:51.237 "dma_device_type": 1 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.237 "dma_device_type": 2 00:13:51.237 } 00:13:51.237 ], 00:13:51.237 "driver_specific": { 00:13:51.237 "raid": { 00:13:51.237 "uuid": "3b05a282-42d0-11ef-96ac-773515fba644", 00:13:51.237 "strip_size_kb": 64, 00:13:51.237 "state": "online", 00:13:51.237 "raid_level": "raid0", 00:13:51.237 "superblock": true, 00:13:51.237 "num_base_bdevs": 4, 00:13:51.237 "num_base_bdevs_discovered": 4, 00:13:51.237 "num_base_bdevs_operational": 4, 00:13:51.237 "base_bdevs_list": [ 00:13:51.237 { 00:13:51.237 "name": "pt1", 00:13:51.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:51.237 "is_configured": true, 00:13:51.237 "data_offset": 2048, 00:13:51.237 "data_size": 63488 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "name": "pt2", 00:13:51.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:51.237 "is_configured": true, 00:13:51.237 "data_offset": 2048, 00:13:51.237 "data_size": 63488 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "name": "pt3", 00:13:51.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:51.237 "is_configured": true, 00:13:51.237 "data_offset": 2048, 00:13:51.237 "data_size": 63488 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "name": "pt4", 00:13:51.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:51.237 "is_configured": true, 00:13:51.237 "data_offset": 2048, 00:13:51.237 "data_size": 63488 00:13:51.237 } 00:13:51.237 ] 00:13:51.237 } 00:13:51.237 } 00:13:51.237 }' 00:13:51.237 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:51.237 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:51.237 pt2 00:13:51.237 pt3 00:13:51.237 pt4' 00:13:51.237 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.237 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:51.237 17:32:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.237 "name": "pt1", 00:13:51.237 "aliases": [ 00:13:51.237 "00000000-0000-0000-0000-000000000001" 00:13:51.237 ], 00:13:51.237 "product_name": "passthru", 00:13:51.237 "block_size": 512, 00:13:51.237 "num_blocks": 65536, 00:13:51.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:51.237 "assigned_rate_limits": { 00:13:51.237 "rw_ios_per_sec": 0, 00:13:51.237 "rw_mbytes_per_sec": 0, 00:13:51.237 "r_mbytes_per_sec": 0, 00:13:51.237 "w_mbytes_per_sec": 0 00:13:51.237 }, 00:13:51.237 "claimed": true, 00:13:51.237 "claim_type": "exclusive_write", 00:13:51.237 "zoned": false, 
00:13:51.237 "supported_io_types": { 00:13:51.237 "read": true, 00:13:51.237 "write": true, 00:13:51.237 "unmap": true, 00:13:51.237 "flush": true, 00:13:51.237 "reset": true, 00:13:51.237 "nvme_admin": false, 00:13:51.237 "nvme_io": false, 00:13:51.237 "nvme_io_md": false, 00:13:51.237 "write_zeroes": true, 00:13:51.237 "zcopy": true, 00:13:51.237 "get_zone_info": false, 00:13:51.237 "zone_management": false, 00:13:51.237 "zone_append": false, 00:13:51.237 "compare": false, 00:13:51.237 "compare_and_write": false, 00:13:51.237 "abort": true, 00:13:51.237 "seek_hole": false, 00:13:51.237 "seek_data": false, 00:13:51.237 "copy": true, 00:13:51.237 "nvme_iov_md": false 00:13:51.237 }, 00:13:51.237 "memory_domains": [ 00:13:51.237 { 00:13:51.237 "dma_device_id": "system", 00:13:51.237 "dma_device_type": 1 00:13:51.237 }, 00:13:51.237 { 00:13:51.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.237 "dma_device_type": 2 00:13:51.237 } 00:13:51.237 ], 00:13:51.237 "driver_specific": { 00:13:51.237 "passthru": { 00:13:51.237 "name": "pt1", 00:13:51.237 "base_bdev_name": "malloc1" 00:13:51.237 } 00:13:51.237 } 00:13:51.237 }' 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.237 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.495 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.495 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.495 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:51.495 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:51.753 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.753 "name": "pt2", 00:13:51.753 "aliases": [ 00:13:51.753 "00000000-0000-0000-0000-000000000002" 00:13:51.753 ], 00:13:51.753 "product_name": "passthru", 00:13:51.753 "block_size": 512, 00:13:51.753 "num_blocks": 65536, 00:13:51.753 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:51.753 "assigned_rate_limits": { 00:13:51.753 "rw_ios_per_sec": 0, 00:13:51.753 "rw_mbytes_per_sec": 0, 00:13:51.753 "r_mbytes_per_sec": 0, 00:13:51.753 "w_mbytes_per_sec": 0 00:13:51.753 }, 00:13:51.753 "claimed": true, 00:13:51.753 "claim_type": "exclusive_write", 00:13:51.753 "zoned": false, 00:13:51.753 "supported_io_types": { 00:13:51.753 "read": true, 00:13:51.753 "write": true, 00:13:51.753 "unmap": true, 00:13:51.753 "flush": true, 
00:13:51.753 "reset": true, 00:13:51.753 "nvme_admin": false, 00:13:51.753 "nvme_io": false, 00:13:51.753 "nvme_io_md": false, 00:13:51.753 "write_zeroes": true, 00:13:51.753 "zcopy": true, 00:13:51.753 "get_zone_info": false, 00:13:51.753 "zone_management": false, 00:13:51.753 "zone_append": false, 00:13:51.753 "compare": false, 00:13:51.753 "compare_and_write": false, 00:13:51.753 "abort": true, 00:13:51.753 "seek_hole": false, 00:13:51.753 "seek_data": false, 00:13:51.753 "copy": true, 00:13:51.753 "nvme_iov_md": false 00:13:51.753 }, 00:13:51.753 "memory_domains": [ 00:13:51.753 { 00:13:51.753 "dma_device_id": "system", 00:13:51.753 "dma_device_type": 1 00:13:51.753 }, 00:13:51.754 { 00:13:51.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.754 "dma_device_type": 2 00:13:51.754 } 00:13:51.754 ], 00:13:51.754 "driver_specific": { 00:13:51.754 "passthru": { 00:13:51.754 "name": "pt2", 00:13:51.754 "base_bdev_name": "malloc2" 00:13:51.754 } 00:13:51.754 } 00:13:51.754 }' 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:51.754 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:52.012 "name": "pt3", 00:13:52.012 "aliases": [ 00:13:52.012 "00000000-0000-0000-0000-000000000003" 00:13:52.012 ], 00:13:52.012 "product_name": "passthru", 00:13:52.012 "block_size": 512, 00:13:52.012 "num_blocks": 65536, 00:13:52.012 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:52.012 "assigned_rate_limits": { 00:13:52.012 "rw_ios_per_sec": 0, 00:13:52.012 "rw_mbytes_per_sec": 0, 00:13:52.012 "r_mbytes_per_sec": 0, 00:13:52.012 "w_mbytes_per_sec": 0 00:13:52.012 }, 00:13:52.012 "claimed": true, 00:13:52.012 "claim_type": "exclusive_write", 00:13:52.012 "zoned": false, 00:13:52.012 "supported_io_types": { 00:13:52.012 "read": true, 00:13:52.012 "write": true, 00:13:52.012 "unmap": true, 00:13:52.012 "flush": true, 00:13:52.012 "reset": true, 00:13:52.012 "nvme_admin": false, 00:13:52.012 "nvme_io": false, 00:13:52.012 "nvme_io_md": false, 00:13:52.012 
"write_zeroes": true, 00:13:52.012 "zcopy": true, 00:13:52.012 "get_zone_info": false, 00:13:52.012 "zone_management": false, 00:13:52.012 "zone_append": false, 00:13:52.012 "compare": false, 00:13:52.012 "compare_and_write": false, 00:13:52.012 "abort": true, 00:13:52.012 "seek_hole": false, 00:13:52.012 "seek_data": false, 00:13:52.012 "copy": true, 00:13:52.012 "nvme_iov_md": false 00:13:52.012 }, 00:13:52.012 "memory_domains": [ 00:13:52.012 { 00:13:52.012 "dma_device_id": "system", 00:13:52.012 "dma_device_type": 1 00:13:52.012 }, 00:13:52.012 { 00:13:52.012 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.012 "dma_device_type": 2 00:13:52.012 } 00:13:52.012 ], 00:13:52.012 "driver_specific": { 00:13:52.012 "passthru": { 00:13:52.012 "name": "pt3", 00:13:52.012 "base_bdev_name": "malloc3" 00:13:52.012 } 00:13:52.012 } 00:13:52.012 }' 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:52.012 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:52.270 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:52.270 "name": "pt4", 00:13:52.270 "aliases": [ 00:13:52.270 "00000000-0000-0000-0000-000000000004" 00:13:52.270 ], 00:13:52.270 "product_name": "passthru", 00:13:52.270 "block_size": 512, 00:13:52.270 "num_blocks": 65536, 00:13:52.270 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:52.271 "assigned_rate_limits": { 00:13:52.271 "rw_ios_per_sec": 0, 00:13:52.271 "rw_mbytes_per_sec": 0, 00:13:52.271 "r_mbytes_per_sec": 0, 00:13:52.271 "w_mbytes_per_sec": 0 00:13:52.271 }, 00:13:52.271 "claimed": true, 00:13:52.271 "claim_type": "exclusive_write", 00:13:52.271 "zoned": false, 00:13:52.271 "supported_io_types": { 00:13:52.271 "read": true, 00:13:52.271 "write": true, 00:13:52.271 "unmap": true, 00:13:52.271 "flush": true, 00:13:52.271 "reset": true, 00:13:52.271 "nvme_admin": false, 00:13:52.271 "nvme_io": false, 00:13:52.271 "nvme_io_md": false, 00:13:52.271 "write_zeroes": true, 00:13:52.271 "zcopy": true, 00:13:52.271 "get_zone_info": false, 00:13:52.271 "zone_management": false, 00:13:52.271 "zone_append": 
false, 00:13:52.271 "compare": false, 00:13:52.271 "compare_and_write": false, 00:13:52.271 "abort": true, 00:13:52.271 "seek_hole": false, 00:13:52.271 "seek_data": false, 00:13:52.271 "copy": true, 00:13:52.271 "nvme_iov_md": false 00:13:52.271 }, 00:13:52.271 "memory_domains": [ 00:13:52.271 { 00:13:52.271 "dma_device_id": "system", 00:13:52.271 "dma_device_type": 1 00:13:52.271 }, 00:13:52.271 { 00:13:52.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:52.271 "dma_device_type": 2 00:13:52.271 } 00:13:52.271 ], 00:13:52.271 "driver_specific": { 00:13:52.271 "passthru": { 00:13:52.271 "name": "pt4", 00:13:52.271 "base_bdev_name": "malloc4" 00:13:52.271 } 00:13:52.271 } 00:13:52.271 }' 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:52.271 17:32:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:52.529 [2024-07-15 17:32:48.215256] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 3b05a282-42d0-11ef-96ac-773515fba644 '!=' 3b05a282-42d0-11ef-96ac-773515fba644 ']' 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 60004 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 60004 ']' 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 60004 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 60004 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:52.529 killing process with pid 60004 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60004' 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 60004 00:13:52.529 [2024-07-15 17:32:48.241495] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.529 [2024-07-15 17:32:48.241520] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.529 [2024-07-15 17:32:48.241536] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.529 [2024-07-15 17:32:48.241541] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3249d0034c80 name raid_bdev1, state offline 00:13:52.529 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 60004 00:13:52.529 [2024-07-15 17:32:48.265469] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.788 17:32:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:52.788 00:13:52.788 real 0m13.626s 00:13:52.788 user 0m24.284s 00:13:52.788 sys 0m2.146s 00:13:52.788 ************************************ 00:13:52.788 END TEST raid_superblock_test 00:13:52.788 ************************************ 00:13:52.788 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:52.788 17:32:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.788 17:32:48 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:52.788 17:32:48 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:52.788 17:32:48 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:52.788 17:32:48 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.788 17:32:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.788 ************************************ 00:13:52.788 START TEST raid_read_error_test 00:13:52.788 ************************************ 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= 
num_base_bdevs )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.pevMclb3Qe 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60405 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60405 /var/tmp/spdk-raid.sock 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 60405 ']' 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.788 17:32:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.788 [2024-07-15 17:32:48.504285] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
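Once bdevperf is listening on the raid socket, the read error test builds an error-injection stack for each base device: a malloc bdev wrapped in an error bdev, exposed through a passthru bdev, and finally assembled into a raid0 volume with a superblock. The following is a condensed sketch of those RPC calls as they appear later in this trace (same rpc.py and socket assumed), not the verbatim test source.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
        # 32 MiB malloc bdev with 512-byte blocks, wrapped in an error
        # injection bdev, then exposed through a passthru bdev.
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "${bdev}_malloc"
        "$rpc" -s "$sock" bdev_error_create "${bdev}_malloc"
        "$rpc" -s "$sock" bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
    done
    # Assemble the four passthru bdevs into a raid0 volume with a superblock (-s).
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r raid0 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
The error bdevs (EE_*) are what later allow the test to inject read failures into individual raid members while bdevperf drives randrw I/O against raid_bdev1.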
00:13:52.788 [2024-07-15 17:32:48.504563] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:53.353 EAL: TSC is not safe to use in SMP mode 00:13:53.353 EAL: TSC is not invariant 00:13:53.353 [2024-07-15 17:32:49.023089] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:53.353 [2024-07-15 17:32:49.112322] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:53.353 [2024-07-15 17:32:49.114520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.353 [2024-07-15 17:32:49.115329] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.353 [2024-07-15 17:32:49.115344] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.919 17:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.919 17:32:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:53.919 17:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:53.919 17:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:54.177 BaseBdev1_malloc 00:13:54.177 17:32:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:54.435 true 00:13:54.436 17:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:54.694 [2024-07-15 17:32:50.335743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:54.694 [2024-07-15 17:32:50.335809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.694 [2024-07-15 17:32:50.335838] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfc046234780 00:13:54.694 [2024-07-15 17:32:50.335847] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.694 [2024-07-15 17:32:50.336576] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.694 [2024-07-15 17:32:50.336601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.694 BaseBdev1 00:13:54.694 17:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:54.694 17:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.952 BaseBdev2_malloc 00:13:54.952 17:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:55.209 true 00:13:55.209 17:32:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:55.467 [2024-07-15 17:32:51.091744] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:55.467 [2024-07-15 17:32:51.091800] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.467 [2024-07-15 17:32:51.091828] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfc046234c80 00:13:55.467 [2024-07-15 17:32:51.091837] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.467 [2024-07-15 17:32:51.092558] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.467 [2024-07-15 17:32:51.092595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.467 BaseBdev2 00:13:55.467 17:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:55.467 17:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:55.725 BaseBdev3_malloc 00:13:55.725 17:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:55.983 true 00:13:55.983 17:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:56.241 [2024-07-15 17:32:51.819749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:56.241 [2024-07-15 17:32:51.819809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.241 [2024-07-15 17:32:51.819836] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfc046235180 00:13:56.241 [2024-07-15 17:32:51.819845] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.241 [2024-07-15 17:32:51.820535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.241 [2024-07-15 17:32:51.820575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:56.241 BaseBdev3 00:13:56.241 17:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:56.241 17:32:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:56.241 BaseBdev4_malloc 00:13:56.499 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:56.758 true 00:13:56.758 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:57.016 [2024-07-15 17:32:52.615756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:57.016 [2024-07-15 17:32:52.615809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:57.016 [2024-07-15 17:32:52.615837] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfc046235680 00:13:57.016 [2024-07-15 17:32:52.615846] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:57.016 [2024-07-15 17:32:52.616535] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:57.016 [2024-07-15 17:32:52.616567] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:57.016 BaseBdev4 00:13:57.016 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:57.274 [2024-07-15 17:32:52.847766] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:57.274 [2024-07-15 17:32:52.848364] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:57.274 [2024-07-15 17:32:52.848389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:57.274 [2024-07-15 17:32:52.848403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:57.274 [2024-07-15 17:32:52.848469] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfc046235900 00:13:57.274 [2024-07-15 17:32:52.848475] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:57.274 [2024-07-15 17:32:52.848514] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfc0462a0e20 00:13:57.274 [2024-07-15 17:32:52.848599] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfc046235900 00:13:57.274 [2024-07-15 17:32:52.848604] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xfc046235900 00:13:57.274 [2024-07-15 17:32:52.848632] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:57.274 17:32:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.532 17:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:57.532 "name": "raid_bdev1", 00:13:57.532 "uuid": "43b71597-42d0-11ef-96ac-773515fba644", 00:13:57.532 "strip_size_kb": 64, 00:13:57.532 "state": "online", 00:13:57.532 "raid_level": "raid0", 00:13:57.532 "superblock": true, 00:13:57.532 "num_base_bdevs": 4, 00:13:57.532 "num_base_bdevs_discovered": 4, 00:13:57.532 "num_base_bdevs_operational": 4, 00:13:57.532 "base_bdevs_list": [ 00:13:57.532 { 00:13:57.532 "name": 
"BaseBdev1", 00:13:57.532 "uuid": "4518bba1-30cd-bc5a-8b5b-20d94451e851", 00:13:57.532 "is_configured": true, 00:13:57.532 "data_offset": 2048, 00:13:57.532 "data_size": 63488 00:13:57.532 }, 00:13:57.532 { 00:13:57.532 "name": "BaseBdev2", 00:13:57.532 "uuid": "6bb886d0-b6ab-355d-8989-2deff5f28e84", 00:13:57.532 "is_configured": true, 00:13:57.532 "data_offset": 2048, 00:13:57.532 "data_size": 63488 00:13:57.532 }, 00:13:57.532 { 00:13:57.532 "name": "BaseBdev3", 00:13:57.532 "uuid": "7ddab747-123b-2e5a-a56c-d8448a75164f", 00:13:57.532 "is_configured": true, 00:13:57.532 "data_offset": 2048, 00:13:57.532 "data_size": 63488 00:13:57.532 }, 00:13:57.532 { 00:13:57.533 "name": "BaseBdev4", 00:13:57.533 "uuid": "27a891b2-595b-025e-83c7-56e3022f818c", 00:13:57.533 "is_configured": true, 00:13:57.533 "data_offset": 2048, 00:13:57.533 "data_size": 63488 00:13:57.533 } 00:13:57.533 ] 00:13:57.533 }' 00:13:57.533 17:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:57.533 17:32:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.789 17:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:57.789 17:32:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:57.789 [2024-07-15 17:32:53.523964] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfc0462a0ec0 00:13:58.723 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.980 17:32:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.238 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:13:59.238 "name": "raid_bdev1", 00:13:59.238 "uuid": "43b71597-42d0-11ef-96ac-773515fba644", 00:13:59.238 "strip_size_kb": 64, 00:13:59.238 "state": "online", 00:13:59.238 "raid_level": "raid0", 00:13:59.238 "superblock": true, 00:13:59.238 "num_base_bdevs": 4, 00:13:59.238 "num_base_bdevs_discovered": 4, 00:13:59.238 "num_base_bdevs_operational": 4, 00:13:59.238 "base_bdevs_list": [ 00:13:59.238 { 00:13:59.238 "name": "BaseBdev1", 00:13:59.238 "uuid": "4518bba1-30cd-bc5a-8b5b-20d94451e851", 00:13:59.238 "is_configured": true, 00:13:59.238 "data_offset": 2048, 00:13:59.238 "data_size": 63488 00:13:59.238 }, 00:13:59.238 { 00:13:59.238 "name": "BaseBdev2", 00:13:59.238 "uuid": "6bb886d0-b6ab-355d-8989-2deff5f28e84", 00:13:59.238 "is_configured": true, 00:13:59.238 "data_offset": 2048, 00:13:59.238 "data_size": 63488 00:13:59.238 }, 00:13:59.238 { 00:13:59.238 "name": "BaseBdev3", 00:13:59.238 "uuid": "7ddab747-123b-2e5a-a56c-d8448a75164f", 00:13:59.238 "is_configured": true, 00:13:59.238 "data_offset": 2048, 00:13:59.238 "data_size": 63488 00:13:59.238 }, 00:13:59.238 { 00:13:59.238 "name": "BaseBdev4", 00:13:59.238 "uuid": "27a891b2-595b-025e-83c7-56e3022f818c", 00:13:59.238 "is_configured": true, 00:13:59.238 "data_offset": 2048, 00:13:59.238 "data_size": 63488 00:13:59.238 } 00:13:59.238 ] 00:13:59.238 }' 00:13:59.238 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:59.239 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.803 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:59.803 [2024-07-15 17:32:55.630517] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.803 [2024-07-15 17:32:55.630546] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.803 [2024-07-15 17:32:55.630885] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.803 [2024-07-15 17:32:55.630896] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.803 [2024-07-15 17:32:55.630905] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.803 [2024-07-15 17:32:55.630909] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfc046235900 name raid_bdev1, state offline 00:13:59.803 0 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60405 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 60405 ']' 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 60405 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60405 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@966 -- # echo 'killing process with pid 60405' 00:14:00.061 killing process with pid 60405 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 60405 00:14:00.061 [2024-07-15 17:32:55.660436] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 60405 00:14:00.061 [2024-07-15 17:32:55.684187] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.pevMclb3Qe 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:14:00.061 00:14:00.061 real 0m7.381s 00:14:00.061 user 0m11.881s 00:14:00.061 sys 0m1.109s 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:00.061 17:32:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.061 ************************************ 00:14:00.061 END TEST raid_read_error_test 00:14:00.061 ************************************ 00:14:00.318 17:32:55 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:00.318 17:32:55 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:14:00.318 17:32:55 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:00.318 17:32:55 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:00.318 17:32:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:00.318 ************************************ 00:14:00.318 START TEST raid_write_error_test 00:14:00.318 ************************************ 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:00.318 
17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Vvb4uWX2ez 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60543 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60543 /var/tmp/spdk-raid.sock 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 60543 ']' 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.318 17:32:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.318 [2024-07-15 17:32:55.931006] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:14:00.318 [2024-07-15 17:32:55.931191] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:00.881 EAL: TSC is not safe to use in SMP mode 00:14:00.881 EAL: TSC is not invariant 00:14:00.881 [2024-07-15 17:32:56.482585] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.881 [2024-07-15 17:32:56.570458] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:00.881 [2024-07-15 17:32:56.572587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.881 [2024-07-15 17:32:56.573411] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.881 [2024-07-15 17:32:56.573430] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:01.447 17:32:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:01.447 17:32:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:01.447 17:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:01.447 17:32:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:01.447 BaseBdev1_malloc 00:14:01.447 17:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:01.704 true 00:14:01.704 17:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:01.962 [2024-07-15 17:32:57.685564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:01.962 [2024-07-15 17:32:57.685623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.962 [2024-07-15 17:32:57.685651] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195b83e34780 00:14:01.962 [2024-07-15 17:32:57.685660] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.962 [2024-07-15 17:32:57.686339] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.962 [2024-07-15 17:32:57.686365] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:01.962 BaseBdev1 00:14:01.962 17:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:01.962 17:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:02.220 BaseBdev2_malloc 00:14:02.220 17:32:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:02.478 true 00:14:02.478 17:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:02.736 [2024-07-15 17:32:58.449581] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:02.736 [2024-07-15 17:32:58.449636] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.736 [2024-07-15 17:32:58.449665] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195b83e34c80 00:14:02.736 [2024-07-15 17:32:58.449674] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.736 [2024-07-15 17:32:58.450381] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.736 [2024-07-15 17:32:58.450416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:02.736 BaseBdev2 00:14:02.736 17:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:02.736 17:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:02.994 BaseBdev3_malloc 00:14:02.994 17:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:03.252 true 00:14:03.252 17:32:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:03.509 [2024-07-15 17:32:59.229593] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:03.509 [2024-07-15 17:32:59.229649] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.509 [2024-07-15 17:32:59.229677] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195b83e35180 00:14:03.509 [2024-07-15 17:32:59.229685] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.509 [2024-07-15 17:32:59.230359] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.509 [2024-07-15 17:32:59.230386] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:03.509 BaseBdev3 00:14:03.509 17:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:03.509 17:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:03.767 BaseBdev4_malloc 00:14:03.767 17:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:04.025 true 00:14:04.025 17:32:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:04.282 [2024-07-15 17:33:00.033604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:04.282 [2024-07-15 17:33:00.033663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:04.282 [2024-07-15 17:33:00.033692] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x195b83e35680 00:14:04.282 [2024-07-15 17:33:00.033700] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:04.282 [2024-07-15 17:33:00.034397] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:04.282 [2024-07-15 17:33:00.034431] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:04.282 BaseBdev4 00:14:04.282 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:04.540 [2024-07-15 17:33:00.305618] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:04.540 [2024-07-15 17:33:00.306195] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:04.540 [2024-07-15 17:33:00.306221] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:04.540 [2024-07-15 17:33:00.306235] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:04.540 [2024-07-15 17:33:00.306314] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x195b83e35900 00:14:04.540 [2024-07-15 17:33:00.306320] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:04.540 [2024-07-15 17:33:00.306360] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x195b83ea0e20 00:14:04.540 [2024-07-15 17:33:00.306441] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x195b83e35900 00:14:04.540 [2024-07-15 17:33:00.306445] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x195b83e35900 00:14:04.540 [2024-07-15 17:33:00.306472] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.540 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:04.798 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:04.798 "name": "raid_bdev1", 00:14:04.798 "uuid": "48290fd2-42d0-11ef-96ac-773515fba644", 00:14:04.798 "strip_size_kb": 64, 00:14:04.798 "state": "online", 00:14:04.798 "raid_level": "raid0", 00:14:04.798 "superblock": true, 00:14:04.798 "num_base_bdevs": 4, 00:14:04.798 "num_base_bdevs_discovered": 4, 00:14:04.798 "num_base_bdevs_operational": 4, 00:14:04.798 "base_bdevs_list": [ 
00:14:04.798 { 00:14:04.798 "name": "BaseBdev1", 00:14:04.798 "uuid": "8b3ddbba-691b-ef52-a57e-8a3c9a686c17", 00:14:04.798 "is_configured": true, 00:14:04.798 "data_offset": 2048, 00:14:04.798 "data_size": 63488 00:14:04.798 }, 00:14:04.798 { 00:14:04.798 "name": "BaseBdev2", 00:14:04.798 "uuid": "398c733d-e840-b051-a757-ad67e57b5846", 00:14:04.798 "is_configured": true, 00:14:04.798 "data_offset": 2048, 00:14:04.798 "data_size": 63488 00:14:04.798 }, 00:14:04.798 { 00:14:04.798 "name": "BaseBdev3", 00:14:04.798 "uuid": "291f9594-8dab-6a5e-a97e-b5759797f52b", 00:14:04.798 "is_configured": true, 00:14:04.798 "data_offset": 2048, 00:14:04.798 "data_size": 63488 00:14:04.798 }, 00:14:04.798 { 00:14:04.798 "name": "BaseBdev4", 00:14:04.798 "uuid": "249613a1-eb72-4d50-bc34-ca09c7431f30", 00:14:04.798 "is_configured": true, 00:14:04.798 "data_offset": 2048, 00:14:04.798 "data_size": 63488 00:14:04.798 } 00:14:04.798 ] 00:14:04.798 }' 00:14:04.798 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:04.798 17:33:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.365 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:05.365 17:33:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:05.365 [2024-07-15 17:33:00.989815] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x195b83ea0ec0 00:14:06.299 17:33:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.557 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.815 17:33:02 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:06.815 "name": "raid_bdev1", 00:14:06.815 "uuid": "48290fd2-42d0-11ef-96ac-773515fba644", 00:14:06.815 "strip_size_kb": 64, 00:14:06.815 "state": "online", 00:14:06.815 "raid_level": "raid0", 00:14:06.815 "superblock": true, 00:14:06.815 "num_base_bdevs": 4, 00:14:06.815 "num_base_bdevs_discovered": 4, 00:14:06.815 "num_base_bdevs_operational": 4, 00:14:06.815 "base_bdevs_list": [ 00:14:06.815 { 00:14:06.815 "name": "BaseBdev1", 00:14:06.815 "uuid": "8b3ddbba-691b-ef52-a57e-8a3c9a686c17", 00:14:06.815 "is_configured": true, 00:14:06.815 "data_offset": 2048, 00:14:06.815 "data_size": 63488 00:14:06.815 }, 00:14:06.815 { 00:14:06.815 "name": "BaseBdev2", 00:14:06.815 "uuid": "398c733d-e840-b051-a757-ad67e57b5846", 00:14:06.815 "is_configured": true, 00:14:06.815 "data_offset": 2048, 00:14:06.815 "data_size": 63488 00:14:06.815 }, 00:14:06.815 { 00:14:06.815 "name": "BaseBdev3", 00:14:06.815 "uuid": "291f9594-8dab-6a5e-a97e-b5759797f52b", 00:14:06.815 "is_configured": true, 00:14:06.815 "data_offset": 2048, 00:14:06.815 "data_size": 63488 00:14:06.815 }, 00:14:06.815 { 00:14:06.815 "name": "BaseBdev4", 00:14:06.815 "uuid": "249613a1-eb72-4d50-bc34-ca09c7431f30", 00:14:06.815 "is_configured": true, 00:14:06.815 "data_offset": 2048, 00:14:06.815 "data_size": 63488 00:14:06.815 } 00:14:06.815 ] 00:14:06.815 }' 00:14:06.815 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:06.816 17:33:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.073 17:33:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:07.331 [2024-07-15 17:33:03.043850] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.331 [2024-07-15 17:33:03.043878] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.331 [2024-07-15 17:33:03.044236] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.331 [2024-07-15 17:33:03.044246] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.331 [2024-07-15 17:33:03.044255] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.331 [2024-07-15 17:33:03.044260] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x195b83e35900 name raid_bdev1, state offline 00:14:07.331 0 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60543 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 60543 ']' 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 60543 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60543 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' 
bdevperf = sudo ']' 00:14:07.331 killing process with pid 60543 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60543' 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 60543 00:14:07.331 [2024-07-15 17:33:03.073472] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.331 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 60543 00:14:07.331 [2024-07-15 17:33:03.097219] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Vvb4uWX2ez 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:14:07.590 00:14:07.590 real 0m7.373s 00:14:07.590 user 0m11.878s 00:14:07.590 sys 0m1.090s 00:14:07.590 ************************************ 00:14:07.590 END TEST raid_write_error_test 00:14:07.590 ************************************ 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:07.590 17:33:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.590 17:33:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:07.590 17:33:03 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:07.590 17:33:03 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:07.590 17:33:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:07.590 17:33:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.590 17:33:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:07.590 ************************************ 00:14:07.590 START TEST raid_state_function_test 00:14:07.590 ************************************ 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # 
(( i++ )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=60679 00:14:07.590 Process raid pid: 60679 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 60679' 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 60679 /var/tmp/spdk-raid.sock 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 60679 ']' 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:07.590 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:07.590 17:33:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:07.590 [2024-07-15 17:33:03.347840] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:14:07.590 [2024-07-15 17:33:03.348123] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:08.155 EAL: TSC is not safe to use in SMP mode 00:14:08.155 EAL: TSC is not invariant 00:14:08.155 [2024-07-15 17:33:03.889161] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.155 [2024-07-15 17:33:03.980172] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:08.155 [2024-07-15 17:33:03.982388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.155 [2024-07-15 17:33:03.983286] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.155 [2024-07-15 17:33:03.983302] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.720 17:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.720 17:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:08.720 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:08.978 [2024-07-15 17:33:04.588193] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.979 [2024-07-15 17:33:04.588244] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.979 [2024-07-15 17:33:04.588249] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.979 [2024-07-15 17:33:04.588258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.979 [2024-07-15 17:33:04.588262] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:08.979 [2024-07-15 17:33:04.588269] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.979 [2024-07-15 17:33:04.588272] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:08.979 [2024-07-15 17:33:04.588280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:08.979 17:33:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:08.979 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.237 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.237 "name": "Existed_Raid", 00:14:09.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.238 "strip_size_kb": 64, 00:14:09.238 "state": "configuring", 00:14:09.238 "raid_level": "concat", 00:14:09.238 "superblock": false, 00:14:09.238 "num_base_bdevs": 4, 00:14:09.238 "num_base_bdevs_discovered": 0, 00:14:09.238 "num_base_bdevs_operational": 4, 00:14:09.238 "base_bdevs_list": [ 00:14:09.238 { 00:14:09.238 "name": "BaseBdev1", 00:14:09.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.238 "is_configured": false, 00:14:09.238 "data_offset": 0, 00:14:09.238 "data_size": 0 00:14:09.238 }, 00:14:09.238 { 00:14:09.238 "name": "BaseBdev2", 00:14:09.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.238 "is_configured": false, 00:14:09.238 "data_offset": 0, 00:14:09.238 "data_size": 0 00:14:09.238 }, 00:14:09.238 { 00:14:09.238 "name": "BaseBdev3", 00:14:09.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.238 "is_configured": false, 00:14:09.238 "data_offset": 0, 00:14:09.238 "data_size": 0 00:14:09.238 }, 00:14:09.238 { 00:14:09.238 "name": "BaseBdev4", 00:14:09.238 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.238 "is_configured": false, 00:14:09.238 "data_offset": 0, 00:14:09.238 "data_size": 0 00:14:09.238 } 00:14:09.238 ] 00:14:09.238 }' 00:14:09.238 17:33:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.238 17:33:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:09.497 17:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:09.755 [2024-07-15 17:33:05.380229] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:09.755 [2024-07-15 17:33:05.380253] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5a08cc34500 name Existed_Raid, state configuring 00:14:09.755 17:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:10.013 [2024-07-15 17:33:05.616233] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:10.013 [2024-07-15 17:33:05.616286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:10.013 [2024-07-15 17:33:05.616291] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.013 [2024-07-15 17:33:05.616300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.013 [2024-07-15 17:33:05.616303] bdev.c:8157:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:10.013 [2024-07-15 17:33:05.616310] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.013 [2024-07-15 17:33:05.616314] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:10.013 [2024-07-15 17:33:05.616321] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:10.013 17:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:10.273 [2024-07-15 17:33:05.885283] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.273 BaseBdev1 00:14:10.273 17:33:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:10.273 17:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:10.273 17:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:10.273 17:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:10.273 17:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:10.273 17:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:10.273 17:33:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:10.531 17:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:10.789 [ 00:14:10.790 { 00:14:10.790 "name": "BaseBdev1", 00:14:10.790 "aliases": [ 00:14:10.790 "4b7c4baa-42d0-11ef-96ac-773515fba644" 00:14:10.790 ], 00:14:10.790 "product_name": "Malloc disk", 00:14:10.790 "block_size": 512, 00:14:10.790 "num_blocks": 65536, 00:14:10.790 "uuid": "4b7c4baa-42d0-11ef-96ac-773515fba644", 00:14:10.790 "assigned_rate_limits": { 00:14:10.790 "rw_ios_per_sec": 0, 00:14:10.790 "rw_mbytes_per_sec": 0, 00:14:10.790 "r_mbytes_per_sec": 0, 00:14:10.790 "w_mbytes_per_sec": 0 00:14:10.790 }, 00:14:10.790 "claimed": true, 00:14:10.790 "claim_type": "exclusive_write", 00:14:10.790 "zoned": false, 00:14:10.790 "supported_io_types": { 00:14:10.790 "read": true, 00:14:10.790 "write": true, 00:14:10.790 "unmap": true, 00:14:10.790 "flush": true, 00:14:10.790 "reset": true, 00:14:10.790 "nvme_admin": false, 00:14:10.790 "nvme_io": false, 00:14:10.790 "nvme_io_md": false, 00:14:10.790 "write_zeroes": true, 00:14:10.790 "zcopy": true, 00:14:10.790 "get_zone_info": false, 00:14:10.790 "zone_management": false, 00:14:10.790 "zone_append": false, 00:14:10.790 "compare": false, 00:14:10.790 "compare_and_write": false, 00:14:10.790 "abort": true, 00:14:10.790 "seek_hole": false, 00:14:10.790 "seek_data": false, 00:14:10.790 "copy": true, 00:14:10.790 "nvme_iov_md": false 00:14:10.790 }, 00:14:10.790 "memory_domains": [ 00:14:10.790 { 00:14:10.790 "dma_device_id": "system", 00:14:10.790 "dma_device_type": 1 00:14:10.790 }, 00:14:10.790 { 00:14:10.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:10.790 "dma_device_type": 2 00:14:10.790 } 00:14:10.790 ], 00:14:10.790 "driver_specific": {} 00:14:10.790 } 00:14:10.790 ] 00:14:10.790 17:33:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.790 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:11.049 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:11.049 "name": "Existed_Raid", 00:14:11.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.049 "strip_size_kb": 64, 00:14:11.049 "state": "configuring", 00:14:11.049 "raid_level": "concat", 00:14:11.049 "superblock": false, 00:14:11.049 "num_base_bdevs": 4, 00:14:11.049 "num_base_bdevs_discovered": 1, 00:14:11.049 "num_base_bdevs_operational": 4, 00:14:11.049 "base_bdevs_list": [ 00:14:11.049 { 00:14:11.049 "name": "BaseBdev1", 00:14:11.049 "uuid": "4b7c4baa-42d0-11ef-96ac-773515fba644", 00:14:11.049 "is_configured": true, 00:14:11.049 "data_offset": 0, 00:14:11.049 "data_size": 65536 00:14:11.049 }, 00:14:11.049 { 00:14:11.049 "name": "BaseBdev2", 00:14:11.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.049 "is_configured": false, 00:14:11.049 "data_offset": 0, 00:14:11.049 "data_size": 0 00:14:11.049 }, 00:14:11.049 { 00:14:11.049 "name": "BaseBdev3", 00:14:11.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.049 "is_configured": false, 00:14:11.049 "data_offset": 0, 00:14:11.049 "data_size": 0 00:14:11.049 }, 00:14:11.049 { 00:14:11.049 "name": "BaseBdev4", 00:14:11.049 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.049 "is_configured": false, 00:14:11.049 "data_offset": 0, 00:14:11.049 "data_size": 0 00:14:11.049 } 00:14:11.049 ] 00:14:11.049 }' 00:14:11.049 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:11.049 17:33:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.308 17:33:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:11.566 [2024-07-15 17:33:07.228259] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:11.566 [2024-07-15 17:33:07.228291] bdev_raid.c: 
367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5a08cc34500 name Existed_Raid, state configuring 00:14:11.566 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:11.824 [2024-07-15 17:33:07.460279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:11.824 [2024-07-15 17:33:07.461153] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:11.824 [2024-07-15 17:33:07.461191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:11.824 [2024-07-15 17:33:07.461195] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:11.824 [2024-07-15 17:33:07.461221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:11.824 [2024-07-15 17:33:07.461224] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:11.824 [2024-07-15 17:33:07.461232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:11.824 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:11.824 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:11.824 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:11.824 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.824 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:11.824 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:11.824 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:11.824 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:11.825 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.825 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.825 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.825 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.825 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.825 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.083 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.083 "name": "Existed_Raid", 00:14:12.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.083 "strip_size_kb": 64, 00:14:12.083 "state": "configuring", 00:14:12.083 "raid_level": "concat", 00:14:12.083 "superblock": false, 00:14:12.083 "num_base_bdevs": 4, 00:14:12.083 "num_base_bdevs_discovered": 1, 00:14:12.083 "num_base_bdevs_operational": 4, 00:14:12.083 "base_bdevs_list": [ 00:14:12.083 { 00:14:12.083 "name": "BaseBdev1", 00:14:12.083 "uuid": 
"4b7c4baa-42d0-11ef-96ac-773515fba644", 00:14:12.083 "is_configured": true, 00:14:12.083 "data_offset": 0, 00:14:12.083 "data_size": 65536 00:14:12.083 }, 00:14:12.083 { 00:14:12.083 "name": "BaseBdev2", 00:14:12.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.083 "is_configured": false, 00:14:12.083 "data_offset": 0, 00:14:12.083 "data_size": 0 00:14:12.083 }, 00:14:12.083 { 00:14:12.083 "name": "BaseBdev3", 00:14:12.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.083 "is_configured": false, 00:14:12.083 "data_offset": 0, 00:14:12.083 "data_size": 0 00:14:12.083 }, 00:14:12.083 { 00:14:12.083 "name": "BaseBdev4", 00:14:12.083 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.083 "is_configured": false, 00:14:12.083 "data_offset": 0, 00:14:12.083 "data_size": 0 00:14:12.083 } 00:14:12.083 ] 00:14:12.083 }' 00:14:12.083 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.083 17:33:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.340 17:33:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:12.598 [2024-07-15 17:33:08.212417] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:12.598 BaseBdev2 00:14:12.598 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:12.598 17:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:12.598 17:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:12.598 17:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:12.598 17:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:12.598 17:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:12.598 17:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.856 17:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:13.115 [ 00:14:13.115 { 00:14:13.115 "name": "BaseBdev2", 00:14:13.115 "aliases": [ 00:14:13.115 "4cdf8660-42d0-11ef-96ac-773515fba644" 00:14:13.115 ], 00:14:13.115 "product_name": "Malloc disk", 00:14:13.115 "block_size": 512, 00:14:13.115 "num_blocks": 65536, 00:14:13.115 "uuid": "4cdf8660-42d0-11ef-96ac-773515fba644", 00:14:13.115 "assigned_rate_limits": { 00:14:13.115 "rw_ios_per_sec": 0, 00:14:13.115 "rw_mbytes_per_sec": 0, 00:14:13.115 "r_mbytes_per_sec": 0, 00:14:13.115 "w_mbytes_per_sec": 0 00:14:13.115 }, 00:14:13.115 "claimed": true, 00:14:13.115 "claim_type": "exclusive_write", 00:14:13.115 "zoned": false, 00:14:13.115 "supported_io_types": { 00:14:13.115 "read": true, 00:14:13.115 "write": true, 00:14:13.115 "unmap": true, 00:14:13.115 "flush": true, 00:14:13.115 "reset": true, 00:14:13.115 "nvme_admin": false, 00:14:13.115 "nvme_io": false, 00:14:13.115 "nvme_io_md": false, 00:14:13.115 "write_zeroes": true, 00:14:13.115 "zcopy": true, 00:14:13.115 "get_zone_info": false, 00:14:13.115 "zone_management": false, 00:14:13.115 "zone_append": false, 00:14:13.115 
"compare": false, 00:14:13.115 "compare_and_write": false, 00:14:13.115 "abort": true, 00:14:13.115 "seek_hole": false, 00:14:13.115 "seek_data": false, 00:14:13.115 "copy": true, 00:14:13.115 "nvme_iov_md": false 00:14:13.115 }, 00:14:13.115 "memory_domains": [ 00:14:13.115 { 00:14:13.115 "dma_device_id": "system", 00:14:13.115 "dma_device_type": 1 00:14:13.115 }, 00:14:13.115 { 00:14:13.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.115 "dma_device_type": 2 00:14:13.115 } 00:14:13.115 ], 00:14:13.115 "driver_specific": {} 00:14:13.115 } 00:14:13.115 ] 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.115 17:33:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.373 17:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.373 "name": "Existed_Raid", 00:14:13.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.373 "strip_size_kb": 64, 00:14:13.373 "state": "configuring", 00:14:13.373 "raid_level": "concat", 00:14:13.373 "superblock": false, 00:14:13.373 "num_base_bdevs": 4, 00:14:13.373 "num_base_bdevs_discovered": 2, 00:14:13.373 "num_base_bdevs_operational": 4, 00:14:13.373 "base_bdevs_list": [ 00:14:13.373 { 00:14:13.373 "name": "BaseBdev1", 00:14:13.373 "uuid": "4b7c4baa-42d0-11ef-96ac-773515fba644", 00:14:13.373 "is_configured": true, 00:14:13.373 "data_offset": 0, 00:14:13.373 "data_size": 65536 00:14:13.373 }, 00:14:13.373 { 00:14:13.373 "name": "BaseBdev2", 00:14:13.373 "uuid": "4cdf8660-42d0-11ef-96ac-773515fba644", 00:14:13.373 "is_configured": true, 00:14:13.373 "data_offset": 0, 00:14:13.373 "data_size": 65536 00:14:13.373 }, 00:14:13.373 { 00:14:13.373 "name": "BaseBdev3", 00:14:13.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.373 "is_configured": false, 00:14:13.373 "data_offset": 0, 00:14:13.373 "data_size": 0 00:14:13.373 }, 00:14:13.373 { 
00:14:13.373 "name": "BaseBdev4", 00:14:13.373 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.373 "is_configured": false, 00:14:13.373 "data_offset": 0, 00:14:13.373 "data_size": 0 00:14:13.373 } 00:14:13.373 ] 00:14:13.373 }' 00:14:13.373 17:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.373 17:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.631 17:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:13.888 [2024-07-15 17:33:09.540433] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:13.888 BaseBdev3 00:14:13.888 17:33:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:13.888 17:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:13.888 17:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:13.888 17:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:13.888 17:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:13.888 17:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:13.888 17:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.146 17:33:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:14.405 [ 00:14:14.405 { 00:14:14.405 "name": "BaseBdev3", 00:14:14.405 "aliases": [ 00:14:14.405 "4daa2aa7-42d0-11ef-96ac-773515fba644" 00:14:14.405 ], 00:14:14.405 "product_name": "Malloc disk", 00:14:14.405 "block_size": 512, 00:14:14.405 "num_blocks": 65536, 00:14:14.405 "uuid": "4daa2aa7-42d0-11ef-96ac-773515fba644", 00:14:14.405 "assigned_rate_limits": { 00:14:14.405 "rw_ios_per_sec": 0, 00:14:14.405 "rw_mbytes_per_sec": 0, 00:14:14.405 "r_mbytes_per_sec": 0, 00:14:14.405 "w_mbytes_per_sec": 0 00:14:14.405 }, 00:14:14.405 "claimed": true, 00:14:14.405 "claim_type": "exclusive_write", 00:14:14.405 "zoned": false, 00:14:14.405 "supported_io_types": { 00:14:14.405 "read": true, 00:14:14.405 "write": true, 00:14:14.405 "unmap": true, 00:14:14.405 "flush": true, 00:14:14.405 "reset": true, 00:14:14.405 "nvme_admin": false, 00:14:14.405 "nvme_io": false, 00:14:14.405 "nvme_io_md": false, 00:14:14.405 "write_zeroes": true, 00:14:14.405 "zcopy": true, 00:14:14.405 "get_zone_info": false, 00:14:14.405 "zone_management": false, 00:14:14.405 "zone_append": false, 00:14:14.405 "compare": false, 00:14:14.405 "compare_and_write": false, 00:14:14.405 "abort": true, 00:14:14.405 "seek_hole": false, 00:14:14.405 "seek_data": false, 00:14:14.405 "copy": true, 00:14:14.405 "nvme_iov_md": false 00:14:14.405 }, 00:14:14.405 "memory_domains": [ 00:14:14.405 { 00:14:14.405 "dma_device_id": "system", 00:14:14.405 "dma_device_type": 1 00:14:14.405 }, 00:14:14.405 { 00:14:14.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.405 "dma_device_type": 2 00:14:14.405 } 00:14:14.405 ], 00:14:14.405 "driver_specific": {} 00:14:14.405 } 00:14:14.405 ] 00:14:14.405 17:33:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.405 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.662 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.662 "name": "Existed_Raid", 00:14:14.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.662 "strip_size_kb": 64, 00:14:14.662 "state": "configuring", 00:14:14.662 "raid_level": "concat", 00:14:14.662 "superblock": false, 00:14:14.662 "num_base_bdevs": 4, 00:14:14.662 "num_base_bdevs_discovered": 3, 00:14:14.662 "num_base_bdevs_operational": 4, 00:14:14.662 "base_bdevs_list": [ 00:14:14.662 { 00:14:14.662 "name": "BaseBdev1", 00:14:14.662 "uuid": "4b7c4baa-42d0-11ef-96ac-773515fba644", 00:14:14.662 "is_configured": true, 00:14:14.662 "data_offset": 0, 00:14:14.662 "data_size": 65536 00:14:14.662 }, 00:14:14.662 { 00:14:14.662 "name": "BaseBdev2", 00:14:14.662 "uuid": "4cdf8660-42d0-11ef-96ac-773515fba644", 00:14:14.662 "is_configured": true, 00:14:14.662 "data_offset": 0, 00:14:14.662 "data_size": 65536 00:14:14.662 }, 00:14:14.662 { 00:14:14.662 "name": "BaseBdev3", 00:14:14.662 "uuid": "4daa2aa7-42d0-11ef-96ac-773515fba644", 00:14:14.662 "is_configured": true, 00:14:14.662 "data_offset": 0, 00:14:14.662 "data_size": 65536 00:14:14.662 }, 00:14:14.662 { 00:14:14.662 "name": "BaseBdev4", 00:14:14.662 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.662 "is_configured": false, 00:14:14.662 "data_offset": 0, 00:14:14.662 "data_size": 0 00:14:14.662 } 00:14:14.662 ] 00:14:14.662 }' 00:14:14.662 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.662 17:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.919 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:15.177 [2024-07-15 17:33:10.780494] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:15.177 [2024-07-15 17:33:10.780522] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5a08cc34a00 00:14:15.177 [2024-07-15 17:33:10.780526] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:15.177 [2024-07-15 17:33:10.780569] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5a08cc97e20 00:14:15.177 [2024-07-15 17:33:10.780675] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5a08cc34a00 00:14:15.177 [2024-07-15 17:33:10.780683] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x5a08cc34a00 00:14:15.177 [2024-07-15 17:33:10.780729] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:15.177 BaseBdev4 00:14:15.177 17:33:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:15.177 17:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:15.177 17:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:15.177 17:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:15.177 17:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:15.177 17:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:15.177 17:33:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:15.433 17:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:15.691 [ 00:14:15.691 { 00:14:15.691 "name": "BaseBdev4", 00:14:15.691 "aliases": [ 00:14:15.691 "4e6761cd-42d0-11ef-96ac-773515fba644" 00:14:15.691 ], 00:14:15.691 "product_name": "Malloc disk", 00:14:15.691 "block_size": 512, 00:14:15.691 "num_blocks": 65536, 00:14:15.691 "uuid": "4e6761cd-42d0-11ef-96ac-773515fba644", 00:14:15.691 "assigned_rate_limits": { 00:14:15.691 "rw_ios_per_sec": 0, 00:14:15.691 "rw_mbytes_per_sec": 0, 00:14:15.691 "r_mbytes_per_sec": 0, 00:14:15.691 "w_mbytes_per_sec": 0 00:14:15.691 }, 00:14:15.691 "claimed": true, 00:14:15.691 "claim_type": "exclusive_write", 00:14:15.691 "zoned": false, 00:14:15.691 "supported_io_types": { 00:14:15.691 "read": true, 00:14:15.691 "write": true, 00:14:15.691 "unmap": true, 00:14:15.691 "flush": true, 00:14:15.691 "reset": true, 00:14:15.691 "nvme_admin": false, 00:14:15.691 "nvme_io": false, 00:14:15.691 "nvme_io_md": false, 00:14:15.691 "write_zeroes": true, 00:14:15.691 "zcopy": true, 00:14:15.691 "get_zone_info": false, 00:14:15.691 "zone_management": false, 00:14:15.691 "zone_append": false, 00:14:15.691 "compare": false, 00:14:15.691 "compare_and_write": false, 00:14:15.691 "abort": true, 00:14:15.691 "seek_hole": false, 00:14:15.691 "seek_data": false, 00:14:15.691 "copy": true, 00:14:15.691 "nvme_iov_md": false 00:14:15.691 }, 00:14:15.691 "memory_domains": [ 00:14:15.691 { 00:14:15.691 "dma_device_id": "system", 00:14:15.691 "dma_device_type": 1 00:14:15.691 }, 00:14:15.691 { 00:14:15.691 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:14:15.691 "dma_device_type": 2 00:14:15.691 } 00:14:15.691 ], 00:14:15.691 "driver_specific": {} 00:14:15.691 } 00:14:15.691 ] 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:15.691 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:15.691 "name": "Existed_Raid", 00:14:15.691 "uuid": "4e6768d4-42d0-11ef-96ac-773515fba644", 00:14:15.691 "strip_size_kb": 64, 00:14:15.691 "state": "online", 00:14:15.691 "raid_level": "concat", 00:14:15.691 "superblock": false, 00:14:15.691 "num_base_bdevs": 4, 00:14:15.691 "num_base_bdevs_discovered": 4, 00:14:15.691 "num_base_bdevs_operational": 4, 00:14:15.691 "base_bdevs_list": [ 00:14:15.691 { 00:14:15.691 "name": "BaseBdev1", 00:14:15.691 "uuid": "4b7c4baa-42d0-11ef-96ac-773515fba644", 00:14:15.691 "is_configured": true, 00:14:15.691 "data_offset": 0, 00:14:15.691 "data_size": 65536 00:14:15.691 }, 00:14:15.691 { 00:14:15.691 "name": "BaseBdev2", 00:14:15.691 "uuid": "4cdf8660-42d0-11ef-96ac-773515fba644", 00:14:15.691 "is_configured": true, 00:14:15.691 "data_offset": 0, 00:14:15.691 "data_size": 65536 00:14:15.691 }, 00:14:15.691 { 00:14:15.691 "name": "BaseBdev3", 00:14:15.691 "uuid": "4daa2aa7-42d0-11ef-96ac-773515fba644", 00:14:15.691 "is_configured": true, 00:14:15.691 "data_offset": 0, 00:14:15.691 "data_size": 65536 00:14:15.691 }, 00:14:15.691 { 00:14:15.691 "name": "BaseBdev4", 00:14:15.692 "uuid": "4e6761cd-42d0-11ef-96ac-773515fba644", 00:14:15.692 "is_configured": true, 00:14:15.692 "data_offset": 0, 00:14:15.692 "data_size": 65536 00:14:15.692 } 00:14:15.692 ] 00:14:15.692 }' 00:14:15.692 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:15.692 17:33:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 
-- # set +x 00:14:16.256 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:16.256 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:16.256 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:16.256 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:16.257 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:16.257 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:16.257 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:16.257 17:33:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:16.513 [2024-07-15 17:33:12.132453] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:16.513 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:16.513 "name": "Existed_Raid", 00:14:16.513 "aliases": [ 00:14:16.513 "4e6768d4-42d0-11ef-96ac-773515fba644" 00:14:16.513 ], 00:14:16.513 "product_name": "Raid Volume", 00:14:16.513 "block_size": 512, 00:14:16.513 "num_blocks": 262144, 00:14:16.513 "uuid": "4e6768d4-42d0-11ef-96ac-773515fba644", 00:14:16.513 "assigned_rate_limits": { 00:14:16.513 "rw_ios_per_sec": 0, 00:14:16.513 "rw_mbytes_per_sec": 0, 00:14:16.513 "r_mbytes_per_sec": 0, 00:14:16.513 "w_mbytes_per_sec": 0 00:14:16.514 }, 00:14:16.514 "claimed": false, 00:14:16.514 "zoned": false, 00:14:16.514 "supported_io_types": { 00:14:16.514 "read": true, 00:14:16.514 "write": true, 00:14:16.514 "unmap": true, 00:14:16.514 "flush": true, 00:14:16.514 "reset": true, 00:14:16.514 "nvme_admin": false, 00:14:16.514 "nvme_io": false, 00:14:16.514 "nvme_io_md": false, 00:14:16.514 "write_zeroes": true, 00:14:16.514 "zcopy": false, 00:14:16.514 "get_zone_info": false, 00:14:16.514 "zone_management": false, 00:14:16.514 "zone_append": false, 00:14:16.514 "compare": false, 00:14:16.514 "compare_and_write": false, 00:14:16.514 "abort": false, 00:14:16.514 "seek_hole": false, 00:14:16.514 "seek_data": false, 00:14:16.514 "copy": false, 00:14:16.514 "nvme_iov_md": false 00:14:16.514 }, 00:14:16.514 "memory_domains": [ 00:14:16.514 { 00:14:16.514 "dma_device_id": "system", 00:14:16.514 "dma_device_type": 1 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.514 "dma_device_type": 2 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "dma_device_id": "system", 00:14:16.514 "dma_device_type": 1 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.514 "dma_device_type": 2 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "dma_device_id": "system", 00:14:16.514 "dma_device_type": 1 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.514 "dma_device_type": 2 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "dma_device_id": "system", 00:14:16.514 "dma_device_type": 1 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.514 "dma_device_type": 2 00:14:16.514 } 00:14:16.514 ], 00:14:16.514 "driver_specific": { 00:14:16.514 "raid": { 00:14:16.514 "uuid": "4e6768d4-42d0-11ef-96ac-773515fba644", 00:14:16.514 "strip_size_kb": 64, 
00:14:16.514 "state": "online", 00:14:16.514 "raid_level": "concat", 00:14:16.514 "superblock": false, 00:14:16.514 "num_base_bdevs": 4, 00:14:16.514 "num_base_bdevs_discovered": 4, 00:14:16.514 "num_base_bdevs_operational": 4, 00:14:16.514 "base_bdevs_list": [ 00:14:16.514 { 00:14:16.514 "name": "BaseBdev1", 00:14:16.514 "uuid": "4b7c4baa-42d0-11ef-96ac-773515fba644", 00:14:16.514 "is_configured": true, 00:14:16.514 "data_offset": 0, 00:14:16.514 "data_size": 65536 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "name": "BaseBdev2", 00:14:16.514 "uuid": "4cdf8660-42d0-11ef-96ac-773515fba644", 00:14:16.514 "is_configured": true, 00:14:16.514 "data_offset": 0, 00:14:16.514 "data_size": 65536 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "name": "BaseBdev3", 00:14:16.514 "uuid": "4daa2aa7-42d0-11ef-96ac-773515fba644", 00:14:16.514 "is_configured": true, 00:14:16.514 "data_offset": 0, 00:14:16.514 "data_size": 65536 00:14:16.514 }, 00:14:16.514 { 00:14:16.514 "name": "BaseBdev4", 00:14:16.514 "uuid": "4e6761cd-42d0-11ef-96ac-773515fba644", 00:14:16.514 "is_configured": true, 00:14:16.514 "data_offset": 0, 00:14:16.514 "data_size": 65536 00:14:16.514 } 00:14:16.514 ] 00:14:16.514 } 00:14:16.514 } 00:14:16.514 }' 00:14:16.514 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:16.514 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:16.514 BaseBdev2 00:14:16.514 BaseBdev3 00:14:16.514 BaseBdev4' 00:14:16.514 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.514 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:16.514 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.772 "name": "BaseBdev1", 00:14:16.772 "aliases": [ 00:14:16.772 "4b7c4baa-42d0-11ef-96ac-773515fba644" 00:14:16.772 ], 00:14:16.772 "product_name": "Malloc disk", 00:14:16.772 "block_size": 512, 00:14:16.772 "num_blocks": 65536, 00:14:16.772 "uuid": "4b7c4baa-42d0-11ef-96ac-773515fba644", 00:14:16.772 "assigned_rate_limits": { 00:14:16.772 "rw_ios_per_sec": 0, 00:14:16.772 "rw_mbytes_per_sec": 0, 00:14:16.772 "r_mbytes_per_sec": 0, 00:14:16.772 "w_mbytes_per_sec": 0 00:14:16.772 }, 00:14:16.772 "claimed": true, 00:14:16.772 "claim_type": "exclusive_write", 00:14:16.772 "zoned": false, 00:14:16.772 "supported_io_types": { 00:14:16.772 "read": true, 00:14:16.772 "write": true, 00:14:16.772 "unmap": true, 00:14:16.772 "flush": true, 00:14:16.772 "reset": true, 00:14:16.772 "nvme_admin": false, 00:14:16.772 "nvme_io": false, 00:14:16.772 "nvme_io_md": false, 00:14:16.772 "write_zeroes": true, 00:14:16.772 "zcopy": true, 00:14:16.772 "get_zone_info": false, 00:14:16.772 "zone_management": false, 00:14:16.772 "zone_append": false, 00:14:16.772 "compare": false, 00:14:16.772 "compare_and_write": false, 00:14:16.772 "abort": true, 00:14:16.772 "seek_hole": false, 00:14:16.772 "seek_data": false, 00:14:16.772 "copy": true, 00:14:16.772 "nvme_iov_md": false 00:14:16.772 }, 00:14:16.772 "memory_domains": [ 00:14:16.772 { 00:14:16.772 "dma_device_id": "system", 00:14:16.772 "dma_device_type": 1 00:14:16.772 }, 00:14:16.772 { 00:14:16.772 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.772 "dma_device_type": 2 00:14:16.772 } 00:14:16.772 ], 00:14:16.772 "driver_specific": {} 00:14:16.772 }' 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:16.772 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.029 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.029 "name": "BaseBdev2", 00:14:17.029 "aliases": [ 00:14:17.029 "4cdf8660-42d0-11ef-96ac-773515fba644" 00:14:17.029 ], 00:14:17.029 "product_name": "Malloc disk", 00:14:17.029 "block_size": 512, 00:14:17.029 "num_blocks": 65536, 00:14:17.030 "uuid": "4cdf8660-42d0-11ef-96ac-773515fba644", 00:14:17.030 "assigned_rate_limits": { 00:14:17.030 "rw_ios_per_sec": 0, 00:14:17.030 "rw_mbytes_per_sec": 0, 00:14:17.030 "r_mbytes_per_sec": 0, 00:14:17.030 "w_mbytes_per_sec": 0 00:14:17.030 }, 00:14:17.030 "claimed": true, 00:14:17.030 "claim_type": "exclusive_write", 00:14:17.030 "zoned": false, 00:14:17.030 "supported_io_types": { 00:14:17.030 "read": true, 00:14:17.030 "write": true, 00:14:17.030 "unmap": true, 00:14:17.030 "flush": true, 00:14:17.030 "reset": true, 00:14:17.030 "nvme_admin": false, 00:14:17.030 "nvme_io": false, 00:14:17.030 "nvme_io_md": false, 00:14:17.030 "write_zeroes": true, 00:14:17.030 "zcopy": true, 00:14:17.030 "get_zone_info": false, 00:14:17.030 "zone_management": false, 00:14:17.030 "zone_append": false, 00:14:17.030 "compare": false, 00:14:17.030 "compare_and_write": false, 00:14:17.030 "abort": true, 00:14:17.030 "seek_hole": false, 00:14:17.030 "seek_data": false, 00:14:17.030 "copy": true, 00:14:17.030 "nvme_iov_md": false 00:14:17.030 }, 00:14:17.030 "memory_domains": [ 00:14:17.030 { 00:14:17.030 "dma_device_id": "system", 00:14:17.030 "dma_device_type": 1 00:14:17.030 }, 00:14:17.030 { 00:14:17.030 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.030 "dma_device_type": 2 00:14:17.030 } 00:14:17.030 ], 00:14:17.030 "driver_specific": {} 00:14:17.030 }' 00:14:17.030 17:33:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:17.030 17:33:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.287 "name": "BaseBdev3", 00:14:17.287 "aliases": [ 00:14:17.287 "4daa2aa7-42d0-11ef-96ac-773515fba644" 00:14:17.287 ], 00:14:17.287 "product_name": "Malloc disk", 00:14:17.287 "block_size": 512, 00:14:17.287 "num_blocks": 65536, 00:14:17.287 "uuid": "4daa2aa7-42d0-11ef-96ac-773515fba644", 00:14:17.287 "assigned_rate_limits": { 00:14:17.287 "rw_ios_per_sec": 0, 00:14:17.287 "rw_mbytes_per_sec": 0, 00:14:17.287 "r_mbytes_per_sec": 0, 00:14:17.287 "w_mbytes_per_sec": 0 00:14:17.287 }, 00:14:17.287 "claimed": true, 00:14:17.287 "claim_type": "exclusive_write", 00:14:17.287 "zoned": false, 00:14:17.287 "supported_io_types": { 00:14:17.287 "read": true, 00:14:17.287 "write": true, 00:14:17.287 "unmap": true, 00:14:17.287 "flush": true, 00:14:17.287 "reset": true, 00:14:17.287 "nvme_admin": false, 00:14:17.287 "nvme_io": false, 00:14:17.287 "nvme_io_md": false, 00:14:17.287 "write_zeroes": true, 00:14:17.287 "zcopy": true, 00:14:17.287 "get_zone_info": false, 00:14:17.287 "zone_management": false, 00:14:17.287 "zone_append": false, 00:14:17.287 "compare": false, 00:14:17.287 "compare_and_write": false, 00:14:17.287 "abort": true, 00:14:17.287 "seek_hole": false, 00:14:17.287 "seek_data": false, 00:14:17.287 "copy": true, 00:14:17.287 "nvme_iov_md": false 00:14:17.287 }, 00:14:17.287 "memory_domains": [ 00:14:17.287 { 00:14:17.287 "dma_device_id": "system", 00:14:17.287 "dma_device_type": 1 00:14:17.287 }, 00:14:17.287 { 00:14:17.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.287 "dma_device_type": 2 00:14:17.287 } 00:14:17.287 ], 00:14:17.287 "driver_specific": {} 00:14:17.287 }' 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.287 
17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:17.287 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:17.544 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:17.544 "name": "BaseBdev4", 00:14:17.544 "aliases": [ 00:14:17.544 "4e6761cd-42d0-11ef-96ac-773515fba644" 00:14:17.544 ], 00:14:17.544 "product_name": "Malloc disk", 00:14:17.544 "block_size": 512, 00:14:17.544 "num_blocks": 65536, 00:14:17.544 "uuid": "4e6761cd-42d0-11ef-96ac-773515fba644", 00:14:17.544 "assigned_rate_limits": { 00:14:17.544 "rw_ios_per_sec": 0, 00:14:17.544 "rw_mbytes_per_sec": 0, 00:14:17.544 "r_mbytes_per_sec": 0, 00:14:17.544 "w_mbytes_per_sec": 0 00:14:17.544 }, 00:14:17.544 "claimed": true, 00:14:17.544 "claim_type": "exclusive_write", 00:14:17.544 "zoned": false, 00:14:17.544 "supported_io_types": { 00:14:17.544 "read": true, 00:14:17.544 "write": true, 00:14:17.544 "unmap": true, 00:14:17.544 "flush": true, 00:14:17.544 "reset": true, 00:14:17.544 "nvme_admin": false, 00:14:17.544 "nvme_io": false, 00:14:17.544 "nvme_io_md": false, 00:14:17.544 "write_zeroes": true, 00:14:17.544 "zcopy": true, 00:14:17.544 "get_zone_info": false, 00:14:17.544 "zone_management": false, 00:14:17.544 "zone_append": false, 00:14:17.544 "compare": false, 00:14:17.544 "compare_and_write": false, 00:14:17.544 "abort": true, 00:14:17.544 "seek_hole": false, 00:14:17.544 "seek_data": false, 00:14:17.544 "copy": true, 00:14:17.544 "nvme_iov_md": false 00:14:17.544 }, 00:14:17.544 "memory_domains": [ 00:14:17.544 { 00:14:17.544 "dma_device_id": "system", 00:14:17.544 "dma_device_type": 1 00:14:17.544 }, 00:14:17.544 { 00:14:17.544 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:17.544 "dma_device_type": 2 00:14:17.544 } 00:14:17.544 ], 00:14:17.544 "driver_specific": {} 00:14:17.544 }' 00:14:17.544 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.544 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:17.544 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:17.544 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:14:17.544 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:17.544 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:17.544 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.818 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:17.818 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:17.818 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.818 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:17.818 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:17.818 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:18.075 [2024-07-15 17:33:13.668418] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:18.076 [2024-07-15 17:33:13.668447] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:18.076 [2024-07-15 17:33:13.668462] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.076 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:18.333 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:18.333 "name": "Existed_Raid", 00:14:18.333 "uuid": 
"4e6768d4-42d0-11ef-96ac-773515fba644", 00:14:18.333 "strip_size_kb": 64, 00:14:18.333 "state": "offline", 00:14:18.333 "raid_level": "concat", 00:14:18.333 "superblock": false, 00:14:18.333 "num_base_bdevs": 4, 00:14:18.333 "num_base_bdevs_discovered": 3, 00:14:18.333 "num_base_bdevs_operational": 3, 00:14:18.333 "base_bdevs_list": [ 00:14:18.333 { 00:14:18.333 "name": null, 00:14:18.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:18.333 "is_configured": false, 00:14:18.333 "data_offset": 0, 00:14:18.333 "data_size": 65536 00:14:18.333 }, 00:14:18.333 { 00:14:18.333 "name": "BaseBdev2", 00:14:18.333 "uuid": "4cdf8660-42d0-11ef-96ac-773515fba644", 00:14:18.333 "is_configured": true, 00:14:18.333 "data_offset": 0, 00:14:18.333 "data_size": 65536 00:14:18.333 }, 00:14:18.333 { 00:14:18.333 "name": "BaseBdev3", 00:14:18.333 "uuid": "4daa2aa7-42d0-11ef-96ac-773515fba644", 00:14:18.333 "is_configured": true, 00:14:18.333 "data_offset": 0, 00:14:18.333 "data_size": 65536 00:14:18.333 }, 00:14:18.333 { 00:14:18.333 "name": "BaseBdev4", 00:14:18.333 "uuid": "4e6761cd-42d0-11ef-96ac-773515fba644", 00:14:18.333 "is_configured": true, 00:14:18.333 "data_offset": 0, 00:14:18.333 "data_size": 65536 00:14:18.333 } 00:14:18.333 ] 00:14:18.333 }' 00:14:18.333 17:33:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:18.333 17:33:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.591 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:18.591 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:18.591 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.591 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:18.849 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:18.849 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.849 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:19.107 [2024-07-15 17:33:14.794501] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:19.107 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:19.107 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:19.107 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.107 17:33:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:19.365 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:19.365 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.365 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:19.623 [2024-07-15 17:33:15.332293] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:19.623 17:33:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:19.623 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:19.623 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.623 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:19.881 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:19.881 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:19.881 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:20.138 [2024-07-15 17:33:15.850130] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:20.138 [2024-07-15 17:33:15.850155] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5a08cc34a00 name Existed_Raid, state offline 00:14:20.138 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:20.138 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:20.138 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.138 17:33:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:20.397 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:20.397 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:20.397 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:20.397 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:20.397 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:20.397 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:20.655 BaseBdev2 00:14:20.655 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:20.655 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:20.655 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:20.655 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:20.655 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:20.655 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:20.655 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:20.912 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:21.171 [ 00:14:21.171 { 00:14:21.171 "name": "BaseBdev2", 00:14:21.171 "aliases": [ 
00:14:21.171 "51bac49b-42d0-11ef-96ac-773515fba644" 00:14:21.171 ], 00:14:21.171 "product_name": "Malloc disk", 00:14:21.171 "block_size": 512, 00:14:21.171 "num_blocks": 65536, 00:14:21.171 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:21.171 "assigned_rate_limits": { 00:14:21.171 "rw_ios_per_sec": 0, 00:14:21.171 "rw_mbytes_per_sec": 0, 00:14:21.171 "r_mbytes_per_sec": 0, 00:14:21.171 "w_mbytes_per_sec": 0 00:14:21.171 }, 00:14:21.171 "claimed": false, 00:14:21.171 "zoned": false, 00:14:21.171 "supported_io_types": { 00:14:21.171 "read": true, 00:14:21.171 "write": true, 00:14:21.171 "unmap": true, 00:14:21.171 "flush": true, 00:14:21.171 "reset": true, 00:14:21.171 "nvme_admin": false, 00:14:21.171 "nvme_io": false, 00:14:21.171 "nvme_io_md": false, 00:14:21.171 "write_zeroes": true, 00:14:21.171 "zcopy": true, 00:14:21.171 "get_zone_info": false, 00:14:21.171 "zone_management": false, 00:14:21.171 "zone_append": false, 00:14:21.171 "compare": false, 00:14:21.171 "compare_and_write": false, 00:14:21.171 "abort": true, 00:14:21.171 "seek_hole": false, 00:14:21.171 "seek_data": false, 00:14:21.171 "copy": true, 00:14:21.171 "nvme_iov_md": false 00:14:21.171 }, 00:14:21.171 "memory_domains": [ 00:14:21.171 { 00:14:21.171 "dma_device_id": "system", 00:14:21.171 "dma_device_type": 1 00:14:21.171 }, 00:14:21.171 { 00:14:21.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.171 "dma_device_type": 2 00:14:21.171 } 00:14:21.171 ], 00:14:21.171 "driver_specific": {} 00:14:21.171 } 00:14:21.171 ] 00:14:21.171 17:33:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:21.171 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:21.171 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:21.171 17:33:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:21.429 BaseBdev3 00:14:21.430 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:21.430 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:21.430 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:21.430 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:21.430 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:21.430 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:21.430 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:21.688 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:21.945 [ 00:14:21.945 { 00:14:21.945 "name": "BaseBdev3", 00:14:21.945 "aliases": [ 00:14:21.945 "522b133d-42d0-11ef-96ac-773515fba644" 00:14:21.945 ], 00:14:21.945 "product_name": "Malloc disk", 00:14:21.945 "block_size": 512, 00:14:21.945 "num_blocks": 65536, 00:14:21.945 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:21.945 "assigned_rate_limits": { 00:14:21.945 "rw_ios_per_sec": 0, 00:14:21.945 "rw_mbytes_per_sec": 
0, 00:14:21.945 "r_mbytes_per_sec": 0, 00:14:21.945 "w_mbytes_per_sec": 0 00:14:21.945 }, 00:14:21.945 "claimed": false, 00:14:21.946 "zoned": false, 00:14:21.946 "supported_io_types": { 00:14:21.946 "read": true, 00:14:21.946 "write": true, 00:14:21.946 "unmap": true, 00:14:21.946 "flush": true, 00:14:21.946 "reset": true, 00:14:21.946 "nvme_admin": false, 00:14:21.946 "nvme_io": false, 00:14:21.946 "nvme_io_md": false, 00:14:21.946 "write_zeroes": true, 00:14:21.946 "zcopy": true, 00:14:21.946 "get_zone_info": false, 00:14:21.946 "zone_management": false, 00:14:21.946 "zone_append": false, 00:14:21.946 "compare": false, 00:14:21.946 "compare_and_write": false, 00:14:21.946 "abort": true, 00:14:21.946 "seek_hole": false, 00:14:21.946 "seek_data": false, 00:14:21.946 "copy": true, 00:14:21.946 "nvme_iov_md": false 00:14:21.946 }, 00:14:21.946 "memory_domains": [ 00:14:21.946 { 00:14:21.946 "dma_device_id": "system", 00:14:21.946 "dma_device_type": 1 00:14:21.946 }, 00:14:21.946 { 00:14:21.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.946 "dma_device_type": 2 00:14:21.946 } 00:14:21.946 ], 00:14:21.946 "driver_specific": {} 00:14:21.946 } 00:14:21.946 ] 00:14:21.946 17:33:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:21.946 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:21.946 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:21.946 17:33:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:22.204 BaseBdev4 00:14:22.204 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:22.204 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:22.204 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:22.204 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:22.204 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:22.204 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:22.204 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:22.461 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:22.732 [ 00:14:22.732 { 00:14:22.732 "name": "BaseBdev4", 00:14:22.732 "aliases": [ 00:14:22.732 "52b3cb93-42d0-11ef-96ac-773515fba644" 00:14:22.732 ], 00:14:22.732 "product_name": "Malloc disk", 00:14:22.732 "block_size": 512, 00:14:22.732 "num_blocks": 65536, 00:14:22.732 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:22.732 "assigned_rate_limits": { 00:14:22.732 "rw_ios_per_sec": 0, 00:14:22.732 "rw_mbytes_per_sec": 0, 00:14:22.732 "r_mbytes_per_sec": 0, 00:14:22.732 "w_mbytes_per_sec": 0 00:14:22.732 }, 00:14:22.732 "claimed": false, 00:14:22.732 "zoned": false, 00:14:22.732 "supported_io_types": { 00:14:22.732 "read": true, 00:14:22.732 "write": true, 00:14:22.732 "unmap": true, 00:14:22.732 "flush": true, 00:14:22.732 "reset": true, 00:14:22.732 
"nvme_admin": false, 00:14:22.732 "nvme_io": false, 00:14:22.732 "nvme_io_md": false, 00:14:22.732 "write_zeroes": true, 00:14:22.732 "zcopy": true, 00:14:22.732 "get_zone_info": false, 00:14:22.732 "zone_management": false, 00:14:22.732 "zone_append": false, 00:14:22.732 "compare": false, 00:14:22.732 "compare_and_write": false, 00:14:22.732 "abort": true, 00:14:22.732 "seek_hole": false, 00:14:22.732 "seek_data": false, 00:14:22.732 "copy": true, 00:14:22.732 "nvme_iov_md": false 00:14:22.732 }, 00:14:22.732 "memory_domains": [ 00:14:22.732 { 00:14:22.732 "dma_device_id": "system", 00:14:22.732 "dma_device_type": 1 00:14:22.732 }, 00:14:22.732 { 00:14:22.732 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:22.732 "dma_device_type": 2 00:14:22.732 } 00:14:22.732 ], 00:14:22.732 "driver_specific": {} 00:14:22.732 } 00:14:22.732 ] 00:14:22.732 17:33:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:22.732 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:22.732 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:22.732 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:22.991 [2024-07-15 17:33:18.764038] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:22.991 [2024-07-15 17:33:18.764092] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:22.991 [2024-07-15 17:33:18.764100] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:22.991 [2024-07-15 17:33:18.764664] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:22.991 [2024-07-15 17:33:18.764682] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.991 17:33:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.249 17:33:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:23.249 "name": "Existed_Raid", 00:14:23.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.249 "strip_size_kb": 64, 00:14:23.249 "state": "configuring", 00:14:23.249 "raid_level": "concat", 00:14:23.249 "superblock": false, 00:14:23.249 "num_base_bdevs": 4, 00:14:23.249 "num_base_bdevs_discovered": 3, 00:14:23.249 "num_base_bdevs_operational": 4, 00:14:23.249 "base_bdevs_list": [ 00:14:23.249 { 00:14:23.249 "name": "BaseBdev1", 00:14:23.249 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:23.249 "is_configured": false, 00:14:23.249 "data_offset": 0, 00:14:23.249 "data_size": 0 00:14:23.249 }, 00:14:23.249 { 00:14:23.249 "name": "BaseBdev2", 00:14:23.249 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:23.249 "is_configured": true, 00:14:23.250 "data_offset": 0, 00:14:23.250 "data_size": 65536 00:14:23.250 }, 00:14:23.250 { 00:14:23.250 "name": "BaseBdev3", 00:14:23.250 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:23.250 "is_configured": true, 00:14:23.250 "data_offset": 0, 00:14:23.250 "data_size": 65536 00:14:23.250 }, 00:14:23.250 { 00:14:23.250 "name": "BaseBdev4", 00:14:23.250 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:23.250 "is_configured": true, 00:14:23.250 "data_offset": 0, 00:14:23.250 "data_size": 65536 00:14:23.250 } 00:14:23.250 ] 00:14:23.250 }' 00:14:23.250 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:23.250 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:23.816 [2024-07-15 17:33:19.604080] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.816 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.381 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.381 "name": "Existed_Raid", 00:14:24.381 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:24.381 "strip_size_kb": 64, 00:14:24.381 "state": "configuring", 00:14:24.381 "raid_level": "concat", 00:14:24.381 "superblock": false, 00:14:24.381 "num_base_bdevs": 4, 00:14:24.381 "num_base_bdevs_discovered": 2, 00:14:24.381 "num_base_bdevs_operational": 4, 00:14:24.381 "base_bdevs_list": [ 00:14:24.381 { 00:14:24.381 "name": "BaseBdev1", 00:14:24.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.381 "is_configured": false, 00:14:24.381 "data_offset": 0, 00:14:24.381 "data_size": 0 00:14:24.381 }, 00:14:24.381 { 00:14:24.381 "name": null, 00:14:24.381 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:24.381 "is_configured": false, 00:14:24.381 "data_offset": 0, 00:14:24.381 "data_size": 65536 00:14:24.381 }, 00:14:24.381 { 00:14:24.381 "name": "BaseBdev3", 00:14:24.381 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:24.381 "is_configured": true, 00:14:24.381 "data_offset": 0, 00:14:24.381 "data_size": 65536 00:14:24.381 }, 00:14:24.381 { 00:14:24.381 "name": "BaseBdev4", 00:14:24.381 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:24.381 "is_configured": true, 00:14:24.381 "data_offset": 0, 00:14:24.381 "data_size": 65536 00:14:24.381 } 00:14:24.381 ] 00:14:24.381 }' 00:14:24.381 17:33:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.381 17:33:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.638 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.638 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:24.896 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:24.896 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:25.155 [2024-07-15 17:33:20.784331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:25.155 BaseBdev1 00:14:25.155 17:33:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:25.155 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:25.155 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:25.155 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:25.155 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:25.155 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:25.155 17:33:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:25.413 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:25.671 [ 00:14:25.671 { 00:14:25.671 "name": "BaseBdev1", 00:14:25.671 "aliases": [ 00:14:25.671 "545dd950-42d0-11ef-96ac-773515fba644" 00:14:25.671 ], 00:14:25.671 "product_name": "Malloc disk", 00:14:25.671 "block_size": 512, 00:14:25.671 "num_blocks": 
65536, 00:14:25.671 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:25.671 "assigned_rate_limits": { 00:14:25.671 "rw_ios_per_sec": 0, 00:14:25.671 "rw_mbytes_per_sec": 0, 00:14:25.671 "r_mbytes_per_sec": 0, 00:14:25.671 "w_mbytes_per_sec": 0 00:14:25.672 }, 00:14:25.672 "claimed": true, 00:14:25.672 "claim_type": "exclusive_write", 00:14:25.672 "zoned": false, 00:14:25.672 "supported_io_types": { 00:14:25.672 "read": true, 00:14:25.672 "write": true, 00:14:25.672 "unmap": true, 00:14:25.672 "flush": true, 00:14:25.672 "reset": true, 00:14:25.672 "nvme_admin": false, 00:14:25.672 "nvme_io": false, 00:14:25.672 "nvme_io_md": false, 00:14:25.672 "write_zeroes": true, 00:14:25.672 "zcopy": true, 00:14:25.672 "get_zone_info": false, 00:14:25.672 "zone_management": false, 00:14:25.672 "zone_append": false, 00:14:25.672 "compare": false, 00:14:25.672 "compare_and_write": false, 00:14:25.672 "abort": true, 00:14:25.672 "seek_hole": false, 00:14:25.672 "seek_data": false, 00:14:25.672 "copy": true, 00:14:25.672 "nvme_iov_md": false 00:14:25.672 }, 00:14:25.672 "memory_domains": [ 00:14:25.672 { 00:14:25.672 "dma_device_id": "system", 00:14:25.672 "dma_device_type": 1 00:14:25.672 }, 00:14:25.672 { 00:14:25.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:25.672 "dma_device_type": 2 00:14:25.672 } 00:14:25.672 ], 00:14:25.672 "driver_specific": {} 00:14:25.672 } 00:14:25.672 ] 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.672 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.951 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.951 "name": "Existed_Raid", 00:14:25.951 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.951 "strip_size_kb": 64, 00:14:25.951 "state": "configuring", 00:14:25.951 "raid_level": "concat", 00:14:25.951 "superblock": false, 00:14:25.951 "num_base_bdevs": 4, 00:14:25.951 "num_base_bdevs_discovered": 3, 00:14:25.951 "num_base_bdevs_operational": 4, 00:14:25.951 "base_bdevs_list": [ 00:14:25.951 { 00:14:25.951 "name": "BaseBdev1", 00:14:25.951 "uuid": 
"545dd950-42d0-11ef-96ac-773515fba644", 00:14:25.951 "is_configured": true, 00:14:25.951 "data_offset": 0, 00:14:25.951 "data_size": 65536 00:14:25.951 }, 00:14:25.951 { 00:14:25.951 "name": null, 00:14:25.951 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:25.951 "is_configured": false, 00:14:25.951 "data_offset": 0, 00:14:25.951 "data_size": 65536 00:14:25.951 }, 00:14:25.951 { 00:14:25.951 "name": "BaseBdev3", 00:14:25.951 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:25.951 "is_configured": true, 00:14:25.951 "data_offset": 0, 00:14:25.951 "data_size": 65536 00:14:25.951 }, 00:14:25.951 { 00:14:25.951 "name": "BaseBdev4", 00:14:25.951 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:25.951 "is_configured": true, 00:14:25.951 "data_offset": 0, 00:14:25.951 "data_size": 65536 00:14:25.951 } 00:14:25.951 ] 00:14:25.951 }' 00:14:25.951 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.951 17:33:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.237 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.237 17:33:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:26.495 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:26.495 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:26.753 [2024-07-15 17:33:22.416235] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.753 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.011 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:27.011 "name": "Existed_Raid", 00:14:27.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.011 "strip_size_kb": 64, 00:14:27.011 "state": "configuring", 00:14:27.011 
"raid_level": "concat", 00:14:27.011 "superblock": false, 00:14:27.011 "num_base_bdevs": 4, 00:14:27.011 "num_base_bdevs_discovered": 2, 00:14:27.011 "num_base_bdevs_operational": 4, 00:14:27.011 "base_bdevs_list": [ 00:14:27.011 { 00:14:27.011 "name": "BaseBdev1", 00:14:27.011 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:27.011 "is_configured": true, 00:14:27.011 "data_offset": 0, 00:14:27.011 "data_size": 65536 00:14:27.011 }, 00:14:27.011 { 00:14:27.011 "name": null, 00:14:27.011 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:27.011 "is_configured": false, 00:14:27.011 "data_offset": 0, 00:14:27.011 "data_size": 65536 00:14:27.011 }, 00:14:27.011 { 00:14:27.011 "name": null, 00:14:27.011 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:27.011 "is_configured": false, 00:14:27.011 "data_offset": 0, 00:14:27.011 "data_size": 65536 00:14:27.011 }, 00:14:27.011 { 00:14:27.011 "name": "BaseBdev4", 00:14:27.011 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:27.011 "is_configured": true, 00:14:27.011 "data_offset": 0, 00:14:27.011 "data_size": 65536 00:14:27.011 } 00:14:27.011 ] 00:14:27.011 }' 00:14:27.011 17:33:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.011 17:33:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.269 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.269 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:27.527 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:27.527 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:27.785 [2024-07-15 17:33:23.504266] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.785 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:28.042 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.042 "name": "Existed_Raid", 00:14:28.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.042 "strip_size_kb": 64, 00:14:28.042 "state": "configuring", 00:14:28.042 "raid_level": "concat", 00:14:28.042 "superblock": false, 00:14:28.042 "num_base_bdevs": 4, 00:14:28.042 "num_base_bdevs_discovered": 3, 00:14:28.042 "num_base_bdevs_operational": 4, 00:14:28.042 "base_bdevs_list": [ 00:14:28.042 { 00:14:28.042 "name": "BaseBdev1", 00:14:28.042 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:28.042 "is_configured": true, 00:14:28.042 "data_offset": 0, 00:14:28.042 "data_size": 65536 00:14:28.042 }, 00:14:28.042 { 00:14:28.042 "name": null, 00:14:28.042 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:28.042 "is_configured": false, 00:14:28.042 "data_offset": 0, 00:14:28.042 "data_size": 65536 00:14:28.042 }, 00:14:28.042 { 00:14:28.042 "name": "BaseBdev3", 00:14:28.042 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:28.042 "is_configured": true, 00:14:28.042 "data_offset": 0, 00:14:28.042 "data_size": 65536 00:14:28.042 }, 00:14:28.042 { 00:14:28.042 "name": "BaseBdev4", 00:14:28.042 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:28.042 "is_configured": true, 00:14:28.042 "data_offset": 0, 00:14:28.042 "data_size": 65536 00:14:28.042 } 00:14:28.042 ] 00:14:28.042 }' 00:14:28.042 17:33:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.042 17:33:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.608 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.608 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:28.608 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:28.608 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:28.867 [2024-07-15 17:33:24.664294] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.867 17:33:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.867 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.126 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.126 "name": "Existed_Raid", 00:14:29.126 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.126 "strip_size_kb": 64, 00:14:29.126 "state": "configuring", 00:14:29.126 "raid_level": "concat", 00:14:29.126 "superblock": false, 00:14:29.126 "num_base_bdevs": 4, 00:14:29.126 "num_base_bdevs_discovered": 2, 00:14:29.126 "num_base_bdevs_operational": 4, 00:14:29.126 "base_bdevs_list": [ 00:14:29.126 { 00:14:29.126 "name": null, 00:14:29.126 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:29.126 "is_configured": false, 00:14:29.126 "data_offset": 0, 00:14:29.126 "data_size": 65536 00:14:29.126 }, 00:14:29.126 { 00:14:29.126 "name": null, 00:14:29.126 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:29.126 "is_configured": false, 00:14:29.126 "data_offset": 0, 00:14:29.126 "data_size": 65536 00:14:29.126 }, 00:14:29.126 { 00:14:29.126 "name": "BaseBdev3", 00:14:29.126 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:29.126 "is_configured": true, 00:14:29.126 "data_offset": 0, 00:14:29.126 "data_size": 65536 00:14:29.126 }, 00:14:29.126 { 00:14:29.126 "name": "BaseBdev4", 00:14:29.126 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:29.126 "is_configured": true, 00:14:29.126 "data_offset": 0, 00:14:29.126 "data_size": 65536 00:14:29.126 } 00:14:29.126 ] 00:14:29.126 }' 00:14:29.126 17:33:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.126 17:33:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.692 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.692 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:29.692 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:29.692 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:29.950 [2024-07-15 17:33:25.770300] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:30.209 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:30.209 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:30.209 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:30.209 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:30.209 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:30.210 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:30.210 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:30.210 17:33:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:30.210 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:30.210 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:30.210 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.210 17:33:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:30.504 17:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:30.504 "name": "Existed_Raid", 00:14:30.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.504 "strip_size_kb": 64, 00:14:30.504 "state": "configuring", 00:14:30.504 "raid_level": "concat", 00:14:30.504 "superblock": false, 00:14:30.504 "num_base_bdevs": 4, 00:14:30.504 "num_base_bdevs_discovered": 3, 00:14:30.504 "num_base_bdevs_operational": 4, 00:14:30.504 "base_bdevs_list": [ 00:14:30.504 { 00:14:30.504 "name": null, 00:14:30.504 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:30.504 "is_configured": false, 00:14:30.504 "data_offset": 0, 00:14:30.504 "data_size": 65536 00:14:30.504 }, 00:14:30.504 { 00:14:30.504 "name": "BaseBdev2", 00:14:30.504 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:30.504 "is_configured": true, 00:14:30.504 "data_offset": 0, 00:14:30.504 "data_size": 65536 00:14:30.504 }, 00:14:30.504 { 00:14:30.504 "name": "BaseBdev3", 00:14:30.504 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:30.504 "is_configured": true, 00:14:30.504 "data_offset": 0, 00:14:30.504 "data_size": 65536 00:14:30.504 }, 00:14:30.504 { 00:14:30.504 "name": "BaseBdev4", 00:14:30.504 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:30.504 "is_configured": true, 00:14:30.504 "data_offset": 0, 00:14:30.504 "data_size": 65536 00:14:30.504 } 00:14:30.504 ] 00:14:30.504 }' 00:14:30.504 17:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:30.504 17:33:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.768 17:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.768 17:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:31.025 17:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:31.025 17:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:31.025 17:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.283 17:33:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 545dd950-42d0-11ef-96ac-773515fba644 00:14:31.283 [2024-07-15 17:33:27.106463] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:31.283 [2024-07-15 17:33:27.106490] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x5a08cc34f00 00:14:31.283 [2024-07-15 17:33:27.106495] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:14:31.283 [2024-07-15 17:33:27.106534] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x5a08cc97e20 00:14:31.283 [2024-07-15 17:33:27.106603] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x5a08cc34f00 00:14:31.283 [2024-07-15 17:33:27.106608] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x5a08cc34f00 00:14:31.283 [2024-07-15 17:33:27.106640] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:31.283 NewBaseBdev 00:14:31.541 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:31.541 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:31.541 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:31.541 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:31.541 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:31.541 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:31.541 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:31.541 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:31.799 [ 00:14:31.799 { 00:14:31.799 "name": "NewBaseBdev", 00:14:31.799 "aliases": [ 00:14:31.799 "545dd950-42d0-11ef-96ac-773515fba644" 00:14:31.799 ], 00:14:31.799 "product_name": "Malloc disk", 00:14:31.799 "block_size": 512, 00:14:31.799 "num_blocks": 65536, 00:14:31.799 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:31.799 "assigned_rate_limits": { 00:14:31.799 "rw_ios_per_sec": 0, 00:14:31.799 "rw_mbytes_per_sec": 0, 00:14:31.799 "r_mbytes_per_sec": 0, 00:14:31.799 "w_mbytes_per_sec": 0 00:14:31.799 }, 00:14:31.799 "claimed": true, 00:14:31.799 "claim_type": "exclusive_write", 00:14:31.799 "zoned": false, 00:14:31.799 "supported_io_types": { 00:14:31.799 "read": true, 00:14:31.799 "write": true, 00:14:31.799 "unmap": true, 00:14:31.799 "flush": true, 00:14:31.799 "reset": true, 00:14:31.799 "nvme_admin": false, 00:14:31.799 "nvme_io": false, 00:14:31.799 "nvme_io_md": false, 00:14:31.799 "write_zeroes": true, 00:14:31.799 "zcopy": true, 00:14:31.799 "get_zone_info": false, 00:14:31.800 "zone_management": false, 00:14:31.800 "zone_append": false, 00:14:31.800 "compare": false, 00:14:31.800 "compare_and_write": false, 00:14:31.800 "abort": true, 00:14:31.800 "seek_hole": false, 00:14:31.800 "seek_data": false, 00:14:31.800 "copy": true, 00:14:31.800 "nvme_iov_md": false 00:14:31.800 }, 00:14:31.800 "memory_domains": [ 00:14:31.800 { 00:14:31.800 "dma_device_id": "system", 00:14:31.800 "dma_device_type": 1 00:14:31.800 }, 00:14:31.800 { 00:14:31.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.800 "dma_device_type": 2 00:14:31.800 } 00:14:31.800 ], 00:14:31.800 "driver_specific": {} 00:14:31.800 } 00:14:31.800 ] 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online 
concat 64 4 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.800 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:32.058 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:32.058 "name": "Existed_Raid", 00:14:32.058 "uuid": "58228e56-42d0-11ef-96ac-773515fba644", 00:14:32.058 "strip_size_kb": 64, 00:14:32.058 "state": "online", 00:14:32.058 "raid_level": "concat", 00:14:32.058 "superblock": false, 00:14:32.058 "num_base_bdevs": 4, 00:14:32.058 "num_base_bdevs_discovered": 4, 00:14:32.058 "num_base_bdevs_operational": 4, 00:14:32.058 "base_bdevs_list": [ 00:14:32.058 { 00:14:32.058 "name": "NewBaseBdev", 00:14:32.058 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:32.058 "is_configured": true, 00:14:32.058 "data_offset": 0, 00:14:32.058 "data_size": 65536 00:14:32.058 }, 00:14:32.058 { 00:14:32.058 "name": "BaseBdev2", 00:14:32.058 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:32.058 "is_configured": true, 00:14:32.058 "data_offset": 0, 00:14:32.058 "data_size": 65536 00:14:32.058 }, 00:14:32.058 { 00:14:32.058 "name": "BaseBdev3", 00:14:32.058 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:32.058 "is_configured": true, 00:14:32.058 "data_offset": 0, 00:14:32.058 "data_size": 65536 00:14:32.058 }, 00:14:32.058 { 00:14:32.058 "name": "BaseBdev4", 00:14:32.058 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:32.058 "is_configured": true, 00:14:32.058 "data_offset": 0, 00:14:32.058 "data_size": 65536 00:14:32.058 } 00:14:32.058 ] 00:14:32.058 }' 00:14:32.058 17:33:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:32.058 17:33:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
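The verify_raid_bdev_state checks traced above reduce to one pattern: dump the RAID bdevs over the test RPC socket, keep the entry named Existed_Raid, and compare a handful of JSON fields against the expected values. A minimal sketch of that pattern, reusing only the rpc.py invocation and jq filter that appear in this trace (the shell variable name and the expected values below are illustrative, not the test script's own code):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # Dump all RAID bdevs and keep only the one named Existed_Raid.
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    # Assert the fields the test compares: state, level, strip size and bdev counts.
    [[ $(jq -r '.state' <<< "$info") == online ]]
    [[ $(jq -r '.raid_level' <<< "$info") == concat ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") == 64 ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 4 ]]

Earlier in the run the same query is what shows the array held in "configuring" with num_base_bdevs_discovered dropping to 3 and then 2 as base bdevs are removed; only once all four are claimed again does the state flip to "online", which is what the dump just above confirms.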
00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:32.623 [2024-07-15 17:33:28.426383] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:32.623 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:32.623 "name": "Existed_Raid", 00:14:32.623 "aliases": [ 00:14:32.623 "58228e56-42d0-11ef-96ac-773515fba644" 00:14:32.623 ], 00:14:32.623 "product_name": "Raid Volume", 00:14:32.623 "block_size": 512, 00:14:32.623 "num_blocks": 262144, 00:14:32.623 "uuid": "58228e56-42d0-11ef-96ac-773515fba644", 00:14:32.623 "assigned_rate_limits": { 00:14:32.623 "rw_ios_per_sec": 0, 00:14:32.623 "rw_mbytes_per_sec": 0, 00:14:32.623 "r_mbytes_per_sec": 0, 00:14:32.623 "w_mbytes_per_sec": 0 00:14:32.623 }, 00:14:32.623 "claimed": false, 00:14:32.623 "zoned": false, 00:14:32.623 "supported_io_types": { 00:14:32.623 "read": true, 00:14:32.624 "write": true, 00:14:32.624 "unmap": true, 00:14:32.624 "flush": true, 00:14:32.624 "reset": true, 00:14:32.624 "nvme_admin": false, 00:14:32.624 "nvme_io": false, 00:14:32.624 "nvme_io_md": false, 00:14:32.624 "write_zeroes": true, 00:14:32.624 "zcopy": false, 00:14:32.624 "get_zone_info": false, 00:14:32.624 "zone_management": false, 00:14:32.624 "zone_append": false, 00:14:32.624 "compare": false, 00:14:32.624 "compare_and_write": false, 00:14:32.624 "abort": false, 00:14:32.624 "seek_hole": false, 00:14:32.624 "seek_data": false, 00:14:32.624 "copy": false, 00:14:32.624 "nvme_iov_md": false 00:14:32.624 }, 00:14:32.624 "memory_domains": [ 00:14:32.624 { 00:14:32.624 "dma_device_id": "system", 00:14:32.624 "dma_device_type": 1 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.624 "dma_device_type": 2 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "dma_device_id": "system", 00:14:32.624 "dma_device_type": 1 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.624 "dma_device_type": 2 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "dma_device_id": "system", 00:14:32.624 "dma_device_type": 1 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.624 "dma_device_type": 2 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "dma_device_id": "system", 00:14:32.624 "dma_device_type": 1 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.624 "dma_device_type": 2 00:14:32.624 } 00:14:32.624 ], 00:14:32.624 "driver_specific": { 00:14:32.624 "raid": { 00:14:32.624 "uuid": "58228e56-42d0-11ef-96ac-773515fba644", 00:14:32.624 "strip_size_kb": 64, 00:14:32.624 "state": "online", 00:14:32.624 "raid_level": "concat", 00:14:32.624 "superblock": false, 00:14:32.624 "num_base_bdevs": 4, 00:14:32.624 "num_base_bdevs_discovered": 4, 00:14:32.624 "num_base_bdevs_operational": 4, 00:14:32.624 "base_bdevs_list": [ 00:14:32.624 { 00:14:32.624 "name": "NewBaseBdev", 00:14:32.624 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:32.624 "is_configured": true, 00:14:32.624 "data_offset": 0, 00:14:32.624 "data_size": 65536 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "name": "BaseBdev2", 00:14:32.624 "uuid": 
"51bac49b-42d0-11ef-96ac-773515fba644", 00:14:32.624 "is_configured": true, 00:14:32.624 "data_offset": 0, 00:14:32.624 "data_size": 65536 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "name": "BaseBdev3", 00:14:32.624 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:32.624 "is_configured": true, 00:14:32.624 "data_offset": 0, 00:14:32.624 "data_size": 65536 00:14:32.624 }, 00:14:32.624 { 00:14:32.624 "name": "BaseBdev4", 00:14:32.624 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:32.624 "is_configured": true, 00:14:32.624 "data_offset": 0, 00:14:32.624 "data_size": 65536 00:14:32.624 } 00:14:32.624 ] 00:14:32.624 } 00:14:32.624 } 00:14:32.624 }' 00:14:32.624 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:32.624 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:32.624 BaseBdev2 00:14:32.624 BaseBdev3 00:14:32.624 BaseBdev4' 00:14:32.624 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:32.624 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:32.624 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:32.882 "name": "NewBaseBdev", 00:14:32.882 "aliases": [ 00:14:32.882 "545dd950-42d0-11ef-96ac-773515fba644" 00:14:32.882 ], 00:14:32.882 "product_name": "Malloc disk", 00:14:32.882 "block_size": 512, 00:14:32.882 "num_blocks": 65536, 00:14:32.882 "uuid": "545dd950-42d0-11ef-96ac-773515fba644", 00:14:32.882 "assigned_rate_limits": { 00:14:32.882 "rw_ios_per_sec": 0, 00:14:32.882 "rw_mbytes_per_sec": 0, 00:14:32.882 "r_mbytes_per_sec": 0, 00:14:32.882 "w_mbytes_per_sec": 0 00:14:32.882 }, 00:14:32.882 "claimed": true, 00:14:32.882 "claim_type": "exclusive_write", 00:14:32.882 "zoned": false, 00:14:32.882 "supported_io_types": { 00:14:32.882 "read": true, 00:14:32.882 "write": true, 00:14:32.882 "unmap": true, 00:14:32.882 "flush": true, 00:14:32.882 "reset": true, 00:14:32.882 "nvme_admin": false, 00:14:32.882 "nvme_io": false, 00:14:32.882 "nvme_io_md": false, 00:14:32.882 "write_zeroes": true, 00:14:32.882 "zcopy": true, 00:14:32.882 "get_zone_info": false, 00:14:32.882 "zone_management": false, 00:14:32.882 "zone_append": false, 00:14:32.882 "compare": false, 00:14:32.882 "compare_and_write": false, 00:14:32.882 "abort": true, 00:14:32.882 "seek_hole": false, 00:14:32.882 "seek_data": false, 00:14:32.882 "copy": true, 00:14:32.882 "nvme_iov_md": false 00:14:32.882 }, 00:14:32.882 "memory_domains": [ 00:14:32.882 { 00:14:32.882 "dma_device_id": "system", 00:14:32.882 "dma_device_type": 1 00:14:32.882 }, 00:14:32.882 { 00:14:32.882 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:32.882 "dma_device_type": 2 00:14:32.882 } 00:14:32.882 ], 00:14:32.882 "driver_specific": {} 00:14:32.882 }' 00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:32.882 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:33.140 "name": "BaseBdev2", 00:14:33.140 "aliases": [ 00:14:33.140 "51bac49b-42d0-11ef-96ac-773515fba644" 00:14:33.140 ], 00:14:33.140 "product_name": "Malloc disk", 00:14:33.140 "block_size": 512, 00:14:33.140 "num_blocks": 65536, 00:14:33.140 "uuid": "51bac49b-42d0-11ef-96ac-773515fba644", 00:14:33.140 "assigned_rate_limits": { 00:14:33.140 "rw_ios_per_sec": 0, 00:14:33.140 "rw_mbytes_per_sec": 0, 00:14:33.140 "r_mbytes_per_sec": 0, 00:14:33.140 "w_mbytes_per_sec": 0 00:14:33.140 }, 00:14:33.140 "claimed": true, 00:14:33.140 "claim_type": "exclusive_write", 00:14:33.140 "zoned": false, 00:14:33.140 "supported_io_types": { 00:14:33.140 "read": true, 00:14:33.140 "write": true, 00:14:33.140 "unmap": true, 00:14:33.140 "flush": true, 00:14:33.140 "reset": true, 00:14:33.140 "nvme_admin": false, 00:14:33.140 "nvme_io": false, 00:14:33.140 "nvme_io_md": false, 00:14:33.140 "write_zeroes": true, 00:14:33.140 "zcopy": true, 00:14:33.140 "get_zone_info": false, 00:14:33.140 "zone_management": false, 00:14:33.140 "zone_append": false, 00:14:33.140 "compare": false, 00:14:33.140 "compare_and_write": false, 00:14:33.140 "abort": true, 00:14:33.140 "seek_hole": false, 00:14:33.140 "seek_data": false, 00:14:33.140 "copy": true, 00:14:33.140 "nvme_iov_md": false 00:14:33.140 }, 00:14:33.140 "memory_domains": [ 00:14:33.140 { 00:14:33.140 "dma_device_id": "system", 00:14:33.140 "dma_device_type": 1 00:14:33.140 }, 00:14:33.140 { 00:14:33.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.140 "dma_device_type": 2 00:14:33.140 } 00:14:33.140 ], 00:14:33.140 "driver_specific": {} 00:14:33.140 }' 00:14:33.140 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:33.398 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:33.398 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:33.398 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:33.398 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:33.398 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == 
null ]] 00:14:33.398 17:33:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:33.398 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:33.398 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:33.398 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.398 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.398 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:33.398 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:33.398 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:33.398 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:33.656 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:33.656 "name": "BaseBdev3", 00:14:33.656 "aliases": [ 00:14:33.656 "522b133d-42d0-11ef-96ac-773515fba644" 00:14:33.656 ], 00:14:33.656 "product_name": "Malloc disk", 00:14:33.656 "block_size": 512, 00:14:33.656 "num_blocks": 65536, 00:14:33.656 "uuid": "522b133d-42d0-11ef-96ac-773515fba644", 00:14:33.656 "assigned_rate_limits": { 00:14:33.656 "rw_ios_per_sec": 0, 00:14:33.656 "rw_mbytes_per_sec": 0, 00:14:33.656 "r_mbytes_per_sec": 0, 00:14:33.656 "w_mbytes_per_sec": 0 00:14:33.656 }, 00:14:33.656 "claimed": true, 00:14:33.656 "claim_type": "exclusive_write", 00:14:33.656 "zoned": false, 00:14:33.656 "supported_io_types": { 00:14:33.656 "read": true, 00:14:33.656 "write": true, 00:14:33.656 "unmap": true, 00:14:33.656 "flush": true, 00:14:33.656 "reset": true, 00:14:33.656 "nvme_admin": false, 00:14:33.656 "nvme_io": false, 00:14:33.656 "nvme_io_md": false, 00:14:33.656 "write_zeroes": true, 00:14:33.656 "zcopy": true, 00:14:33.656 "get_zone_info": false, 00:14:33.656 "zone_management": false, 00:14:33.656 "zone_append": false, 00:14:33.656 "compare": false, 00:14:33.656 "compare_and_write": false, 00:14:33.656 "abort": true, 00:14:33.656 "seek_hole": false, 00:14:33.656 "seek_data": false, 00:14:33.656 "copy": true, 00:14:33.656 "nvme_iov_md": false 00:14:33.656 }, 00:14:33.656 "memory_domains": [ 00:14:33.656 { 00:14:33.656 "dma_device_id": "system", 00:14:33.656 "dma_device_type": 1 00:14:33.656 }, 00:14:33.656 { 00:14:33.656 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.656 "dma_device_type": 2 00:14:33.656 } 00:14:33.656 ], 00:14:33.656 "driver_specific": {} 00:14:33.656 }' 00:14:33.656 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:33.656 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:33.656 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:33.656 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:33.656 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:33.656 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:33.656 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:33.657 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
jq .md_interleave 00:14:33.657 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:33.657 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.657 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.657 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:33.657 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:33.657 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:33.657 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:33.915 "name": "BaseBdev4", 00:14:33.915 "aliases": [ 00:14:33.915 "52b3cb93-42d0-11ef-96ac-773515fba644" 00:14:33.915 ], 00:14:33.915 "product_name": "Malloc disk", 00:14:33.915 "block_size": 512, 00:14:33.915 "num_blocks": 65536, 00:14:33.915 "uuid": "52b3cb93-42d0-11ef-96ac-773515fba644", 00:14:33.915 "assigned_rate_limits": { 00:14:33.915 "rw_ios_per_sec": 0, 00:14:33.915 "rw_mbytes_per_sec": 0, 00:14:33.915 "r_mbytes_per_sec": 0, 00:14:33.915 "w_mbytes_per_sec": 0 00:14:33.915 }, 00:14:33.915 "claimed": true, 00:14:33.915 "claim_type": "exclusive_write", 00:14:33.915 "zoned": false, 00:14:33.915 "supported_io_types": { 00:14:33.915 "read": true, 00:14:33.915 "write": true, 00:14:33.915 "unmap": true, 00:14:33.915 "flush": true, 00:14:33.915 "reset": true, 00:14:33.915 "nvme_admin": false, 00:14:33.915 "nvme_io": false, 00:14:33.915 "nvme_io_md": false, 00:14:33.915 "write_zeroes": true, 00:14:33.915 "zcopy": true, 00:14:33.915 "get_zone_info": false, 00:14:33.915 "zone_management": false, 00:14:33.915 "zone_append": false, 00:14:33.915 "compare": false, 00:14:33.915 "compare_and_write": false, 00:14:33.915 "abort": true, 00:14:33.915 "seek_hole": false, 00:14:33.915 "seek_data": false, 00:14:33.915 "copy": true, 00:14:33.915 "nvme_iov_md": false 00:14:33.915 }, 00:14:33.915 "memory_domains": [ 00:14:33.915 { 00:14:33.915 "dma_device_id": "system", 00:14:33.915 "dma_device_type": 1 00:14:33.915 }, 00:14:33.915 { 00:14:33.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:33.915 "dma_device_type": 2 00:14:33.915 } 00:14:33.915 ], 00:14:33.915 "driver_specific": {} 00:14:33.915 }' 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:33.915 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:34.215 [2024-07-15 17:33:29.950369] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.215 [2024-07-15 17:33:29.950392] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:34.215 [2024-07-15 17:33:29.950415] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:34.215 [2024-07-15 17:33:29.950429] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:34.215 [2024-07-15 17:33:29.950434] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x5a08cc34f00 name Existed_Raid, state offline 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 60679 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 60679 ']' 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 60679 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 60679 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:34.215 killing process with pid 60679 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60679' 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 60679 00:14:34.215 [2024-07-15 17:33:29.978069] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:34.215 17:33:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 60679 00:14:34.215 [2024-07-15 17:33:30.001506] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:34.504 00:14:34.504 real 0m26.843s 00:14:34.504 user 0m49.313s 00:14:34.504 sys 0m3.524s 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:34.504 ************************************ 00:14:34.504 END TEST raid_state_function_test 00:14:34.504 ************************************ 00:14:34.504 17:33:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:34.504 17:33:30 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:34.504 17:33:30 bdev_raid -- common/autotest_common.sh@1099 -- # 
'[' 5 -le 1 ']' 00:14:34.504 17:33:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.504 17:33:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:34.504 ************************************ 00:14:34.504 START TEST raid_state_function_test_sb 00:14:34.504 ************************************ 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=61494 00:14:34.504 Process raid pid: 61494 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61494' 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 61494 /var/tmp/spdk-raid.sock 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 61494 ']' 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:34.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:34.504 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:34.505 17:33:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.505 [2024-07-15 17:33:30.233935] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:14:34.505 [2024-07-15 17:33:30.234074] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:35.069 EAL: TSC is not safe to use in SMP mode 00:14:35.069 EAL: TSC is not invariant 00:14:35.069 [2024-07-15 17:33:30.781025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.069 [2024-07-15 17:33:30.879036] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
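For reference, the RPC sequence this raid_state_function_test_sb run exercises boils down to a handful of commands against the bdev_svc target started above. The sketch below is a rough, hypothetical condensation, not the test script itself: the real test also deletes and recreates Existed_Raid between steps to exercise the state transitions. It assumes the target is still listening on /var/tmp/spdk-raid.sock and that rpc.py and jq are available at the paths shown in this log.

#!/usr/bin/env bash
# Hypothetical condensation of the RPC calls traced in this run (not the test script).
set -euo pipefail
rpc=(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock)

# Ask for the raid volume first; with no base bdevs present it stays "configuring".
"${rpc[@]}" bdev_raid_create -z 64 -s -r concat \
    -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

# Add the four 32 MiB / 512-byte-block malloc base bdevs; each one is claimed on
# examine, and the volume only reaches "online" once the fourth is configured.
for name in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    "${rpc[@]}" bdev_malloc_create 32 512 -b "$name"
    "${rpc[@]}" bdev_wait_for_examine
    "${rpc[@]}" bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid") | .state'
done

Because concat carries no redundancy, deleting any base bdev afterwards (bdev_malloc_delete BaseBdev1) takes Existed_Raid back out of the online state, which is what the checks towards the end of this test verify.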
00:14:35.069 [2024-07-15 17:33:30.881452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.069 [2024-07-15 17:33:30.882360] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.069 [2024-07-15 17:33:30.882377] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:35.636 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.636 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:35.636 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:35.894 [2024-07-15 17:33:31.551510] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:35.894 [2024-07-15 17:33:31.551555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:35.894 [2024-07-15 17:33:31.551561] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:35.894 [2024-07-15 17:33:31.551570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:35.894 [2024-07-15 17:33:31.551574] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:35.894 [2024-07-15 17:33:31.551581] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:35.894 [2024-07-15 17:33:31.551584] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:35.894 [2024-07-15 17:33:31.551591] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.894 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.151 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.151 "name": "Existed_Raid", 00:14:36.151 "uuid": 
"5ac8cea2-42d0-11ef-96ac-773515fba644", 00:14:36.151 "strip_size_kb": 64, 00:14:36.151 "state": "configuring", 00:14:36.151 "raid_level": "concat", 00:14:36.151 "superblock": true, 00:14:36.151 "num_base_bdevs": 4, 00:14:36.151 "num_base_bdevs_discovered": 0, 00:14:36.151 "num_base_bdevs_operational": 4, 00:14:36.151 "base_bdevs_list": [ 00:14:36.152 { 00:14:36.152 "name": "BaseBdev1", 00:14:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.152 "is_configured": false, 00:14:36.152 "data_offset": 0, 00:14:36.152 "data_size": 0 00:14:36.152 }, 00:14:36.152 { 00:14:36.152 "name": "BaseBdev2", 00:14:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.152 "is_configured": false, 00:14:36.152 "data_offset": 0, 00:14:36.152 "data_size": 0 00:14:36.152 }, 00:14:36.152 { 00:14:36.152 "name": "BaseBdev3", 00:14:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.152 "is_configured": false, 00:14:36.152 "data_offset": 0, 00:14:36.152 "data_size": 0 00:14:36.152 }, 00:14:36.152 { 00:14:36.152 "name": "BaseBdev4", 00:14:36.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.152 "is_configured": false, 00:14:36.152 "data_offset": 0, 00:14:36.152 "data_size": 0 00:14:36.152 } 00:14:36.152 ] 00:14:36.152 }' 00:14:36.152 17:33:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.152 17:33:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.409 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:36.692 [2024-07-15 17:33:32.391501] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:36.692 [2024-07-15 17:33:32.391531] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c2ad2434500 name Existed_Raid, state configuring 00:14:36.692 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:36.950 [2024-07-15 17:33:32.623520] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:36.950 [2024-07-15 17:33:32.623579] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:36.950 [2024-07-15 17:33:32.623585] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.950 [2024-07-15 17:33:32.623593] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.950 [2024-07-15 17:33:32.623596] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.950 [2024-07-15 17:33:32.623604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.950 [2024-07-15 17:33:32.623607] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:36.950 [2024-07-15 17:33:32.623614] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.950 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:37.208 [2024-07-15 17:33:32.864530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:14:37.208 BaseBdev1 00:14:37.208 17:33:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:37.208 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:37.208 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:37.208 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:37.208 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:37.208 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:37.208 17:33:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.466 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:37.724 [ 00:14:37.724 { 00:14:37.724 "name": "BaseBdev1", 00:14:37.724 "aliases": [ 00:14:37.724 "5b910129-42d0-11ef-96ac-773515fba644" 00:14:37.724 ], 00:14:37.724 "product_name": "Malloc disk", 00:14:37.724 "block_size": 512, 00:14:37.724 "num_blocks": 65536, 00:14:37.724 "uuid": "5b910129-42d0-11ef-96ac-773515fba644", 00:14:37.724 "assigned_rate_limits": { 00:14:37.724 "rw_ios_per_sec": 0, 00:14:37.724 "rw_mbytes_per_sec": 0, 00:14:37.724 "r_mbytes_per_sec": 0, 00:14:37.724 "w_mbytes_per_sec": 0 00:14:37.724 }, 00:14:37.724 "claimed": true, 00:14:37.724 "claim_type": "exclusive_write", 00:14:37.724 "zoned": false, 00:14:37.724 "supported_io_types": { 00:14:37.724 "read": true, 00:14:37.724 "write": true, 00:14:37.724 "unmap": true, 00:14:37.724 "flush": true, 00:14:37.724 "reset": true, 00:14:37.724 "nvme_admin": false, 00:14:37.724 "nvme_io": false, 00:14:37.724 "nvme_io_md": false, 00:14:37.724 "write_zeroes": true, 00:14:37.724 "zcopy": true, 00:14:37.724 "get_zone_info": false, 00:14:37.724 "zone_management": false, 00:14:37.724 "zone_append": false, 00:14:37.724 "compare": false, 00:14:37.724 "compare_and_write": false, 00:14:37.724 "abort": true, 00:14:37.724 "seek_hole": false, 00:14:37.724 "seek_data": false, 00:14:37.724 "copy": true, 00:14:37.724 "nvme_iov_md": false 00:14:37.724 }, 00:14:37.724 "memory_domains": [ 00:14:37.724 { 00:14:37.724 "dma_device_id": "system", 00:14:37.724 "dma_device_type": 1 00:14:37.724 }, 00:14:37.724 { 00:14:37.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.724 "dma_device_type": 2 00:14:37.724 } 00:14:37.724 ], 00:14:37.724 "driver_specific": {} 00:14:37.724 } 00:14:37.724 ] 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.724 17:33:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.724 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.981 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.981 "name": "Existed_Raid", 00:14:37.981 "uuid": "5b6c620c-42d0-11ef-96ac-773515fba644", 00:14:37.981 "strip_size_kb": 64, 00:14:37.981 "state": "configuring", 00:14:37.981 "raid_level": "concat", 00:14:37.981 "superblock": true, 00:14:37.981 "num_base_bdevs": 4, 00:14:37.981 "num_base_bdevs_discovered": 1, 00:14:37.981 "num_base_bdevs_operational": 4, 00:14:37.981 "base_bdevs_list": [ 00:14:37.981 { 00:14:37.981 "name": "BaseBdev1", 00:14:37.981 "uuid": "5b910129-42d0-11ef-96ac-773515fba644", 00:14:37.981 "is_configured": true, 00:14:37.981 "data_offset": 2048, 00:14:37.981 "data_size": 63488 00:14:37.981 }, 00:14:37.981 { 00:14:37.981 "name": "BaseBdev2", 00:14:37.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.981 "is_configured": false, 00:14:37.981 "data_offset": 0, 00:14:37.981 "data_size": 0 00:14:37.981 }, 00:14:37.981 { 00:14:37.981 "name": "BaseBdev3", 00:14:37.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.981 "is_configured": false, 00:14:37.981 "data_offset": 0, 00:14:37.981 "data_size": 0 00:14:37.981 }, 00:14:37.981 { 00:14:37.981 "name": "BaseBdev4", 00:14:37.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.981 "is_configured": false, 00:14:37.981 "data_offset": 0, 00:14:37.981 "data_size": 0 00:14:37.981 } 00:14:37.981 ] 00:14:37.981 }' 00:14:37.981 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.981 17:33:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.238 17:33:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:38.495 [2024-07-15 17:33:34.151632] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:38.495 [2024-07-15 17:33:34.151664] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c2ad2434500 name Existed_Raid, state configuring 00:14:38.495 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:38.753 [2024-07-15 17:33:34.379667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:38.753 [2024-07-15 17:33:34.380454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:38.753 [2024-07-15 17:33:34.380491] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:38.753 [2024-07-15 17:33:34.380502] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:38.753 [2024-07-15 17:33:34.380510] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:38.753 [2024-07-15 17:33:34.380514] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:38.753 [2024-07-15 17:33:34.380522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:38.753 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:38.753 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:38.753 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:38.753 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:38.753 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.754 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:39.011 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:39.011 "name": "Existed_Raid", 00:14:39.011 "uuid": "5c78596a-42d0-11ef-96ac-773515fba644", 00:14:39.011 "strip_size_kb": 64, 00:14:39.011 "state": "configuring", 00:14:39.011 "raid_level": "concat", 00:14:39.011 "superblock": true, 00:14:39.011 "num_base_bdevs": 4, 00:14:39.011 "num_base_bdevs_discovered": 1, 00:14:39.011 "num_base_bdevs_operational": 4, 00:14:39.011 "base_bdevs_list": [ 00:14:39.011 { 00:14:39.011 "name": "BaseBdev1", 00:14:39.011 "uuid": "5b910129-42d0-11ef-96ac-773515fba644", 00:14:39.011 "is_configured": true, 00:14:39.011 "data_offset": 2048, 00:14:39.011 "data_size": 63488 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "name": "BaseBdev2", 00:14:39.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.011 "is_configured": false, 00:14:39.011 "data_offset": 0, 00:14:39.011 "data_size": 0 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "name": "BaseBdev3", 00:14:39.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.011 "is_configured": false, 00:14:39.011 "data_offset": 0, 00:14:39.011 "data_size": 0 00:14:39.011 }, 00:14:39.011 { 00:14:39.011 "name": "BaseBdev4", 
00:14:39.011 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:39.011 "is_configured": false, 00:14:39.011 "data_offset": 0, 00:14:39.011 "data_size": 0 00:14:39.011 } 00:14:39.011 ] 00:14:39.011 }' 00:14:39.011 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:39.011 17:33:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.269 17:33:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.527 [2024-07-15 17:33:35.219828] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:39.527 BaseBdev2 00:14:39.527 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:39.527 17:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:39.527 17:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:39.527 17:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:39.527 17:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:39.527 17:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:39.527 17:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:39.785 17:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:40.043 [ 00:14:40.043 { 00:14:40.043 "name": "BaseBdev2", 00:14:40.043 "aliases": [ 00:14:40.043 "5cf886f2-42d0-11ef-96ac-773515fba644" 00:14:40.043 ], 00:14:40.043 "product_name": "Malloc disk", 00:14:40.043 "block_size": 512, 00:14:40.043 "num_blocks": 65536, 00:14:40.043 "uuid": "5cf886f2-42d0-11ef-96ac-773515fba644", 00:14:40.043 "assigned_rate_limits": { 00:14:40.043 "rw_ios_per_sec": 0, 00:14:40.043 "rw_mbytes_per_sec": 0, 00:14:40.043 "r_mbytes_per_sec": 0, 00:14:40.043 "w_mbytes_per_sec": 0 00:14:40.043 }, 00:14:40.043 "claimed": true, 00:14:40.043 "claim_type": "exclusive_write", 00:14:40.043 "zoned": false, 00:14:40.043 "supported_io_types": { 00:14:40.043 "read": true, 00:14:40.043 "write": true, 00:14:40.043 "unmap": true, 00:14:40.043 "flush": true, 00:14:40.043 "reset": true, 00:14:40.043 "nvme_admin": false, 00:14:40.043 "nvme_io": false, 00:14:40.043 "nvme_io_md": false, 00:14:40.043 "write_zeroes": true, 00:14:40.043 "zcopy": true, 00:14:40.043 "get_zone_info": false, 00:14:40.043 "zone_management": false, 00:14:40.043 "zone_append": false, 00:14:40.043 "compare": false, 00:14:40.043 "compare_and_write": false, 00:14:40.043 "abort": true, 00:14:40.043 "seek_hole": false, 00:14:40.043 "seek_data": false, 00:14:40.043 "copy": true, 00:14:40.043 "nvme_iov_md": false 00:14:40.043 }, 00:14:40.043 "memory_domains": [ 00:14:40.043 { 00:14:40.043 "dma_device_id": "system", 00:14:40.043 "dma_device_type": 1 00:14:40.043 }, 00:14:40.043 { 00:14:40.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.043 "dma_device_type": 2 00:14:40.043 } 00:14:40.043 ], 00:14:40.043 "driver_specific": {} 00:14:40.043 } 00:14:40.043 ] 00:14:40.043 17:33:35 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.043 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.302 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.302 "name": "Existed_Raid", 00:14:40.302 "uuid": "5c78596a-42d0-11ef-96ac-773515fba644", 00:14:40.302 "strip_size_kb": 64, 00:14:40.302 "state": "configuring", 00:14:40.302 "raid_level": "concat", 00:14:40.302 "superblock": true, 00:14:40.302 "num_base_bdevs": 4, 00:14:40.302 "num_base_bdevs_discovered": 2, 00:14:40.302 "num_base_bdevs_operational": 4, 00:14:40.302 "base_bdevs_list": [ 00:14:40.302 { 00:14:40.302 "name": "BaseBdev1", 00:14:40.302 "uuid": "5b910129-42d0-11ef-96ac-773515fba644", 00:14:40.302 "is_configured": true, 00:14:40.302 "data_offset": 2048, 00:14:40.302 "data_size": 63488 00:14:40.302 }, 00:14:40.302 { 00:14:40.302 "name": "BaseBdev2", 00:14:40.302 "uuid": "5cf886f2-42d0-11ef-96ac-773515fba644", 00:14:40.302 "is_configured": true, 00:14:40.302 "data_offset": 2048, 00:14:40.302 "data_size": 63488 00:14:40.302 }, 00:14:40.302 { 00:14:40.302 "name": "BaseBdev3", 00:14:40.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.302 "is_configured": false, 00:14:40.302 "data_offset": 0, 00:14:40.302 "data_size": 0 00:14:40.302 }, 00:14:40.302 { 00:14:40.302 "name": "BaseBdev4", 00:14:40.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:40.302 "is_configured": false, 00:14:40.302 "data_offset": 0, 00:14:40.302 "data_size": 0 00:14:40.302 } 00:14:40.302 ] 00:14:40.302 }' 00:14:40.302 17:33:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.302 17:33:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.559 17:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:40.816 [2024-07-15 17:33:36.551864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.816 BaseBdev3 00:14:40.816 17:33:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:40.816 17:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:40.816 17:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:40.816 17:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:40.816 17:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:40.816 17:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:40.817 17:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:41.074 17:33:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:41.332 [ 00:14:41.332 { 00:14:41.332 "name": "BaseBdev3", 00:14:41.332 "aliases": [ 00:14:41.332 "5dc3c879-42d0-11ef-96ac-773515fba644" 00:14:41.332 ], 00:14:41.332 "product_name": "Malloc disk", 00:14:41.332 "block_size": 512, 00:14:41.332 "num_blocks": 65536, 00:14:41.332 "uuid": "5dc3c879-42d0-11ef-96ac-773515fba644", 00:14:41.332 "assigned_rate_limits": { 00:14:41.332 "rw_ios_per_sec": 0, 00:14:41.332 "rw_mbytes_per_sec": 0, 00:14:41.332 "r_mbytes_per_sec": 0, 00:14:41.332 "w_mbytes_per_sec": 0 00:14:41.332 }, 00:14:41.332 "claimed": true, 00:14:41.332 "claim_type": "exclusive_write", 00:14:41.332 "zoned": false, 00:14:41.332 "supported_io_types": { 00:14:41.332 "read": true, 00:14:41.332 "write": true, 00:14:41.332 "unmap": true, 00:14:41.332 "flush": true, 00:14:41.332 "reset": true, 00:14:41.332 "nvme_admin": false, 00:14:41.332 "nvme_io": false, 00:14:41.332 "nvme_io_md": false, 00:14:41.332 "write_zeroes": true, 00:14:41.332 "zcopy": true, 00:14:41.332 "get_zone_info": false, 00:14:41.332 "zone_management": false, 00:14:41.332 "zone_append": false, 00:14:41.332 "compare": false, 00:14:41.332 "compare_and_write": false, 00:14:41.332 "abort": true, 00:14:41.332 "seek_hole": false, 00:14:41.332 "seek_data": false, 00:14:41.332 "copy": true, 00:14:41.332 "nvme_iov_md": false 00:14:41.332 }, 00:14:41.332 "memory_domains": [ 00:14:41.332 { 00:14:41.332 "dma_device_id": "system", 00:14:41.332 "dma_device_type": 1 00:14:41.332 }, 00:14:41.332 { 00:14:41.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.332 "dma_device_type": 2 00:14:41.332 } 00:14:41.332 ], 00:14:41.332 "driver_specific": {} 00:14:41.332 } 00:14:41.332 ] 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.332 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.590 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:41.590 "name": "Existed_Raid", 00:14:41.590 "uuid": "5c78596a-42d0-11ef-96ac-773515fba644", 00:14:41.590 "strip_size_kb": 64, 00:14:41.590 "state": "configuring", 00:14:41.590 "raid_level": "concat", 00:14:41.590 "superblock": true, 00:14:41.590 "num_base_bdevs": 4, 00:14:41.590 "num_base_bdevs_discovered": 3, 00:14:41.590 "num_base_bdevs_operational": 4, 00:14:41.590 "base_bdevs_list": [ 00:14:41.590 { 00:14:41.590 "name": "BaseBdev1", 00:14:41.590 "uuid": "5b910129-42d0-11ef-96ac-773515fba644", 00:14:41.590 "is_configured": true, 00:14:41.590 "data_offset": 2048, 00:14:41.590 "data_size": 63488 00:14:41.590 }, 00:14:41.590 { 00:14:41.590 "name": "BaseBdev2", 00:14:41.590 "uuid": "5cf886f2-42d0-11ef-96ac-773515fba644", 00:14:41.590 "is_configured": true, 00:14:41.590 "data_offset": 2048, 00:14:41.590 "data_size": 63488 00:14:41.590 }, 00:14:41.590 { 00:14:41.590 "name": "BaseBdev3", 00:14:41.590 "uuid": "5dc3c879-42d0-11ef-96ac-773515fba644", 00:14:41.590 "is_configured": true, 00:14:41.590 "data_offset": 2048, 00:14:41.590 "data_size": 63488 00:14:41.590 }, 00:14:41.590 { 00:14:41.590 "name": "BaseBdev4", 00:14:41.590 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.590 "is_configured": false, 00:14:41.590 "data_offset": 0, 00:14:41.590 "data_size": 0 00:14:41.590 } 00:14:41.590 ] 00:14:41.590 }' 00:14:41.590 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:41.590 17:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.847 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:42.105 [2024-07-15 17:33:37.907928] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:42.105 [2024-07-15 17:33:37.907996] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c2ad2434a00 00:14:42.105 [2024-07-15 17:33:37.908002] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:42.105 [2024-07-15 
17:33:37.908024] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c2ad2497e20 00:14:42.105 [2024-07-15 17:33:37.908080] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c2ad2434a00 00:14:42.105 [2024-07-15 17:33:37.908084] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c2ad2434a00 00:14:42.105 [2024-07-15 17:33:37.908105] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:42.105 BaseBdev4 00:14:42.105 17:33:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:42.105 17:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:42.105 17:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:42.105 17:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:42.105 17:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:42.105 17:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:42.105 17:33:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:42.671 17:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:42.671 [ 00:14:42.671 { 00:14:42.671 "name": "BaseBdev4", 00:14:42.671 "aliases": [ 00:14:42.671 "5e92b3e7-42d0-11ef-96ac-773515fba644" 00:14:42.671 ], 00:14:42.671 "product_name": "Malloc disk", 00:14:42.671 "block_size": 512, 00:14:42.671 "num_blocks": 65536, 00:14:42.671 "uuid": "5e92b3e7-42d0-11ef-96ac-773515fba644", 00:14:42.671 "assigned_rate_limits": { 00:14:42.671 "rw_ios_per_sec": 0, 00:14:42.671 "rw_mbytes_per_sec": 0, 00:14:42.671 "r_mbytes_per_sec": 0, 00:14:42.671 "w_mbytes_per_sec": 0 00:14:42.671 }, 00:14:42.671 "claimed": true, 00:14:42.671 "claim_type": "exclusive_write", 00:14:42.671 "zoned": false, 00:14:42.671 "supported_io_types": { 00:14:42.671 "read": true, 00:14:42.671 "write": true, 00:14:42.671 "unmap": true, 00:14:42.671 "flush": true, 00:14:42.671 "reset": true, 00:14:42.671 "nvme_admin": false, 00:14:42.671 "nvme_io": false, 00:14:42.671 "nvme_io_md": false, 00:14:42.671 "write_zeroes": true, 00:14:42.671 "zcopy": true, 00:14:42.672 "get_zone_info": false, 00:14:42.672 "zone_management": false, 00:14:42.672 "zone_append": false, 00:14:42.672 "compare": false, 00:14:42.672 "compare_and_write": false, 00:14:42.672 "abort": true, 00:14:42.672 "seek_hole": false, 00:14:42.672 "seek_data": false, 00:14:42.672 "copy": true, 00:14:42.672 "nvme_iov_md": false 00:14:42.672 }, 00:14:42.672 "memory_domains": [ 00:14:42.672 { 00:14:42.672 "dma_device_id": "system", 00:14:42.672 "dma_device_type": 1 00:14:42.672 }, 00:14:42.672 { 00:14:42.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:42.672 "dma_device_type": 2 00:14:42.672 } 00:14:42.672 ], 00:14:42.672 "driver_specific": {} 00:14:42.672 } 00:14:42.672 ] 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.672 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:42.930 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.930 "name": "Existed_Raid", 00:14:42.930 "uuid": "5c78596a-42d0-11ef-96ac-773515fba644", 00:14:42.930 "strip_size_kb": 64, 00:14:42.930 "state": "online", 00:14:42.930 "raid_level": "concat", 00:14:42.930 "superblock": true, 00:14:42.930 "num_base_bdevs": 4, 00:14:42.930 "num_base_bdevs_discovered": 4, 00:14:42.930 "num_base_bdevs_operational": 4, 00:14:42.930 "base_bdevs_list": [ 00:14:42.930 { 00:14:42.930 "name": "BaseBdev1", 00:14:42.930 "uuid": "5b910129-42d0-11ef-96ac-773515fba644", 00:14:42.930 "is_configured": true, 00:14:42.930 "data_offset": 2048, 00:14:42.930 "data_size": 63488 00:14:42.930 }, 00:14:42.930 { 00:14:42.930 "name": "BaseBdev2", 00:14:42.930 "uuid": "5cf886f2-42d0-11ef-96ac-773515fba644", 00:14:42.930 "is_configured": true, 00:14:42.930 "data_offset": 2048, 00:14:42.930 "data_size": 63488 00:14:42.930 }, 00:14:42.930 { 00:14:42.930 "name": "BaseBdev3", 00:14:42.930 "uuid": "5dc3c879-42d0-11ef-96ac-773515fba644", 00:14:42.930 "is_configured": true, 00:14:42.930 "data_offset": 2048, 00:14:42.930 "data_size": 63488 00:14:42.930 }, 00:14:42.930 { 00:14:42.930 "name": "BaseBdev4", 00:14:42.930 "uuid": "5e92b3e7-42d0-11ef-96ac-773515fba644", 00:14:42.930 "is_configured": true, 00:14:42.930 "data_offset": 2048, 00:14:42.930 "data_size": 63488 00:14:42.930 } 00:14:42.930 ] 00:14:42.930 }' 00:14:42.930 17:33:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.930 17:33:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # 
local raid_bdev_info 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:43.496 [2024-07-15 17:33:39.287880] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:43.496 "name": "Existed_Raid", 00:14:43.496 "aliases": [ 00:14:43.496 "5c78596a-42d0-11ef-96ac-773515fba644" 00:14:43.496 ], 00:14:43.496 "product_name": "Raid Volume", 00:14:43.496 "block_size": 512, 00:14:43.496 "num_blocks": 253952, 00:14:43.496 "uuid": "5c78596a-42d0-11ef-96ac-773515fba644", 00:14:43.496 "assigned_rate_limits": { 00:14:43.496 "rw_ios_per_sec": 0, 00:14:43.496 "rw_mbytes_per_sec": 0, 00:14:43.496 "r_mbytes_per_sec": 0, 00:14:43.496 "w_mbytes_per_sec": 0 00:14:43.496 }, 00:14:43.496 "claimed": false, 00:14:43.496 "zoned": false, 00:14:43.496 "supported_io_types": { 00:14:43.496 "read": true, 00:14:43.496 "write": true, 00:14:43.496 "unmap": true, 00:14:43.496 "flush": true, 00:14:43.496 "reset": true, 00:14:43.496 "nvme_admin": false, 00:14:43.496 "nvme_io": false, 00:14:43.496 "nvme_io_md": false, 00:14:43.496 "write_zeroes": true, 00:14:43.496 "zcopy": false, 00:14:43.496 "get_zone_info": false, 00:14:43.496 "zone_management": false, 00:14:43.496 "zone_append": false, 00:14:43.496 "compare": false, 00:14:43.496 "compare_and_write": false, 00:14:43.496 "abort": false, 00:14:43.496 "seek_hole": false, 00:14:43.496 "seek_data": false, 00:14:43.496 "copy": false, 00:14:43.496 "nvme_iov_md": false 00:14:43.496 }, 00:14:43.496 "memory_domains": [ 00:14:43.496 { 00:14:43.496 "dma_device_id": "system", 00:14:43.496 "dma_device_type": 1 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.496 "dma_device_type": 2 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "dma_device_id": "system", 00:14:43.496 "dma_device_type": 1 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.496 "dma_device_type": 2 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "dma_device_id": "system", 00:14:43.496 "dma_device_type": 1 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.496 "dma_device_type": 2 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "dma_device_id": "system", 00:14:43.496 "dma_device_type": 1 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.496 "dma_device_type": 2 00:14:43.496 } 00:14:43.496 ], 00:14:43.496 "driver_specific": { 00:14:43.496 "raid": { 00:14:43.496 "uuid": "5c78596a-42d0-11ef-96ac-773515fba644", 00:14:43.496 "strip_size_kb": 64, 00:14:43.496 "state": "online", 00:14:43.496 "raid_level": "concat", 00:14:43.496 "superblock": true, 00:14:43.496 "num_base_bdevs": 4, 00:14:43.496 "num_base_bdevs_discovered": 4, 00:14:43.496 "num_base_bdevs_operational": 4, 00:14:43.496 "base_bdevs_list": [ 00:14:43.496 { 00:14:43.496 "name": "BaseBdev1", 00:14:43.496 "uuid": 
"5b910129-42d0-11ef-96ac-773515fba644", 00:14:43.496 "is_configured": true, 00:14:43.496 "data_offset": 2048, 00:14:43.496 "data_size": 63488 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "name": "BaseBdev2", 00:14:43.496 "uuid": "5cf886f2-42d0-11ef-96ac-773515fba644", 00:14:43.496 "is_configured": true, 00:14:43.496 "data_offset": 2048, 00:14:43.496 "data_size": 63488 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "name": "BaseBdev3", 00:14:43.496 "uuid": "5dc3c879-42d0-11ef-96ac-773515fba644", 00:14:43.496 "is_configured": true, 00:14:43.496 "data_offset": 2048, 00:14:43.496 "data_size": 63488 00:14:43.496 }, 00:14:43.496 { 00:14:43.496 "name": "BaseBdev4", 00:14:43.496 "uuid": "5e92b3e7-42d0-11ef-96ac-773515fba644", 00:14:43.496 "is_configured": true, 00:14:43.496 "data_offset": 2048, 00:14:43.496 "data_size": 63488 00:14:43.496 } 00:14:43.496 ] 00:14:43.496 } 00:14:43.496 } 00:14:43.496 }' 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:43.496 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:43.496 BaseBdev2 00:14:43.496 BaseBdev3 00:14:43.497 BaseBdev4' 00:14:43.497 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:43.497 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:43.497 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:43.755 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:43.755 "name": "BaseBdev1", 00:14:43.755 "aliases": [ 00:14:43.755 "5b910129-42d0-11ef-96ac-773515fba644" 00:14:43.755 ], 00:14:43.755 "product_name": "Malloc disk", 00:14:43.755 "block_size": 512, 00:14:43.755 "num_blocks": 65536, 00:14:43.755 "uuid": "5b910129-42d0-11ef-96ac-773515fba644", 00:14:43.755 "assigned_rate_limits": { 00:14:43.755 "rw_ios_per_sec": 0, 00:14:43.755 "rw_mbytes_per_sec": 0, 00:14:43.755 "r_mbytes_per_sec": 0, 00:14:43.755 "w_mbytes_per_sec": 0 00:14:43.755 }, 00:14:43.755 "claimed": true, 00:14:43.755 "claim_type": "exclusive_write", 00:14:43.755 "zoned": false, 00:14:43.755 "supported_io_types": { 00:14:43.755 "read": true, 00:14:43.755 "write": true, 00:14:43.755 "unmap": true, 00:14:43.755 "flush": true, 00:14:43.755 "reset": true, 00:14:43.755 "nvme_admin": false, 00:14:43.755 "nvme_io": false, 00:14:43.755 "nvme_io_md": false, 00:14:43.755 "write_zeroes": true, 00:14:43.755 "zcopy": true, 00:14:43.755 "get_zone_info": false, 00:14:43.755 "zone_management": false, 00:14:43.755 "zone_append": false, 00:14:43.755 "compare": false, 00:14:43.755 "compare_and_write": false, 00:14:43.755 "abort": true, 00:14:43.755 "seek_hole": false, 00:14:43.755 "seek_data": false, 00:14:43.755 "copy": true, 00:14:43.755 "nvme_iov_md": false 00:14:43.755 }, 00:14:43.755 "memory_domains": [ 00:14:43.755 { 00:14:43.755 "dma_device_id": "system", 00:14:43.755 "dma_device_type": 1 00:14:43.755 }, 00:14:43.755 { 00:14:43.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:43.755 "dma_device_type": 2 00:14:43.755 } 00:14:43.755 ], 00:14:43.755 "driver_specific": {} 00:14:43.755 }' 00:14:43.755 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.755 17:33:39 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:43.755 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:43.755 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.755 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:43.755 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:43.755 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:43.755 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.013 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.013 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.013 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.013 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.013 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.013 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:44.013 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.271 "name": "BaseBdev2", 00:14:44.271 "aliases": [ 00:14:44.271 "5cf886f2-42d0-11ef-96ac-773515fba644" 00:14:44.271 ], 00:14:44.271 "product_name": "Malloc disk", 00:14:44.271 "block_size": 512, 00:14:44.271 "num_blocks": 65536, 00:14:44.271 "uuid": "5cf886f2-42d0-11ef-96ac-773515fba644", 00:14:44.271 "assigned_rate_limits": { 00:14:44.271 "rw_ios_per_sec": 0, 00:14:44.271 "rw_mbytes_per_sec": 0, 00:14:44.271 "r_mbytes_per_sec": 0, 00:14:44.271 "w_mbytes_per_sec": 0 00:14:44.271 }, 00:14:44.271 "claimed": true, 00:14:44.271 "claim_type": "exclusive_write", 00:14:44.271 "zoned": false, 00:14:44.271 "supported_io_types": { 00:14:44.271 "read": true, 00:14:44.271 "write": true, 00:14:44.271 "unmap": true, 00:14:44.271 "flush": true, 00:14:44.271 "reset": true, 00:14:44.271 "nvme_admin": false, 00:14:44.271 "nvme_io": false, 00:14:44.271 "nvme_io_md": false, 00:14:44.271 "write_zeroes": true, 00:14:44.271 "zcopy": true, 00:14:44.271 "get_zone_info": false, 00:14:44.271 "zone_management": false, 00:14:44.271 "zone_append": false, 00:14:44.271 "compare": false, 00:14:44.271 "compare_and_write": false, 00:14:44.271 "abort": true, 00:14:44.271 "seek_hole": false, 00:14:44.271 "seek_data": false, 00:14:44.271 "copy": true, 00:14:44.271 "nvme_iov_md": false 00:14:44.271 }, 00:14:44.271 "memory_domains": [ 00:14:44.271 { 00:14:44.271 "dma_device_id": "system", 00:14:44.271 "dma_device_type": 1 00:14:44.271 }, 00:14:44.271 { 00:14:44.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.271 "dma_device_type": 2 00:14:44.271 } 00:14:44.271 ], 00:14:44.271 "driver_specific": {} 00:14:44.271 }' 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.271 17:33:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.530 "name": "BaseBdev3", 00:14:44.530 "aliases": [ 00:14:44.530 "5dc3c879-42d0-11ef-96ac-773515fba644" 00:14:44.530 ], 00:14:44.530 "product_name": "Malloc disk", 00:14:44.530 "block_size": 512, 00:14:44.530 "num_blocks": 65536, 00:14:44.530 "uuid": "5dc3c879-42d0-11ef-96ac-773515fba644", 00:14:44.530 "assigned_rate_limits": { 00:14:44.530 "rw_ios_per_sec": 0, 00:14:44.530 "rw_mbytes_per_sec": 0, 00:14:44.530 "r_mbytes_per_sec": 0, 00:14:44.530 "w_mbytes_per_sec": 0 00:14:44.530 }, 00:14:44.530 "claimed": true, 00:14:44.530 "claim_type": "exclusive_write", 00:14:44.530 "zoned": false, 00:14:44.530 "supported_io_types": { 00:14:44.530 "read": true, 00:14:44.530 "write": true, 00:14:44.530 "unmap": true, 00:14:44.530 "flush": true, 00:14:44.530 "reset": true, 00:14:44.530 "nvme_admin": false, 00:14:44.530 "nvme_io": false, 00:14:44.530 "nvme_io_md": false, 00:14:44.530 "write_zeroes": true, 00:14:44.530 "zcopy": true, 00:14:44.530 "get_zone_info": false, 00:14:44.530 "zone_management": false, 00:14:44.530 "zone_append": false, 00:14:44.530 "compare": false, 00:14:44.530 "compare_and_write": false, 00:14:44.530 "abort": true, 00:14:44.530 "seek_hole": false, 00:14:44.530 "seek_data": false, 00:14:44.530 "copy": true, 00:14:44.530 "nvme_iov_md": false 00:14:44.530 }, 00:14:44.530 "memory_domains": [ 00:14:44.530 { 00:14:44.530 "dma_device_id": "system", 00:14:44.530 "dma_device_type": 1 00:14:44.530 }, 00:14:44.530 { 00:14:44.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.530 "dma_device_type": 2 00:14:44.530 } 00:14:44.530 ], 00:14:44.530 "driver_specific": {} 00:14:44.530 }' 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:44.530 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:44.788 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:44.788 "name": "BaseBdev4", 00:14:44.788 "aliases": [ 00:14:44.788 "5e92b3e7-42d0-11ef-96ac-773515fba644" 00:14:44.788 ], 00:14:44.788 "product_name": "Malloc disk", 00:14:44.788 "block_size": 512, 00:14:44.788 "num_blocks": 65536, 00:14:44.788 "uuid": "5e92b3e7-42d0-11ef-96ac-773515fba644", 00:14:44.788 "assigned_rate_limits": { 00:14:44.788 "rw_ios_per_sec": 0, 00:14:44.788 "rw_mbytes_per_sec": 0, 00:14:44.788 "r_mbytes_per_sec": 0, 00:14:44.788 "w_mbytes_per_sec": 0 00:14:44.788 }, 00:14:44.788 "claimed": true, 00:14:44.788 "claim_type": "exclusive_write", 00:14:44.788 "zoned": false, 00:14:44.788 "supported_io_types": { 00:14:44.788 "read": true, 00:14:44.788 "write": true, 00:14:44.788 "unmap": true, 00:14:44.788 "flush": true, 00:14:44.788 "reset": true, 00:14:44.788 "nvme_admin": false, 00:14:44.788 "nvme_io": false, 00:14:44.788 "nvme_io_md": false, 00:14:44.788 "write_zeroes": true, 00:14:44.788 "zcopy": true, 00:14:44.788 "get_zone_info": false, 00:14:44.788 "zone_management": false, 00:14:44.788 "zone_append": false, 00:14:44.788 "compare": false, 00:14:44.788 "compare_and_write": false, 00:14:44.788 "abort": true, 00:14:44.788 "seek_hole": false, 00:14:44.788 "seek_data": false, 00:14:44.788 "copy": true, 00:14:44.788 "nvme_iov_md": false 00:14:44.788 }, 00:14:44.788 "memory_domains": [ 00:14:44.788 { 00:14:44.788 "dma_device_id": "system", 00:14:44.788 "dma_device_type": 1 00:14:44.788 }, 00:14:44.788 { 00:14:44.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.788 "dma_device_type": 2 00:14:44.788 } 00:14:44.788 ], 00:14:44.788 "driver_specific": {} 00:14:44.788 }' 00:14:44.788 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.788 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:44.788 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:44.788 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.788 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:44.788 17:33:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:44.789 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.789 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:44.789 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:44.789 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.789 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:44.789 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:44.789 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:45.046 [2024-07-15 17:33:40.871868] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:45.046 [2024-07-15 17:33:40.871890] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:45.046 [2024-07-15 17:33:40.871904] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.303 17:33:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:45.560 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:45.560 "name": "Existed_Raid", 00:14:45.560 "uuid": "5c78596a-42d0-11ef-96ac-773515fba644", 00:14:45.560 "strip_size_kb": 64, 
00:14:45.560 "state": "offline", 00:14:45.560 "raid_level": "concat", 00:14:45.560 "superblock": true, 00:14:45.560 "num_base_bdevs": 4, 00:14:45.560 "num_base_bdevs_discovered": 3, 00:14:45.560 "num_base_bdevs_operational": 3, 00:14:45.560 "base_bdevs_list": [ 00:14:45.560 { 00:14:45.560 "name": null, 00:14:45.560 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.560 "is_configured": false, 00:14:45.560 "data_offset": 2048, 00:14:45.560 "data_size": 63488 00:14:45.560 }, 00:14:45.560 { 00:14:45.560 "name": "BaseBdev2", 00:14:45.560 "uuid": "5cf886f2-42d0-11ef-96ac-773515fba644", 00:14:45.560 "is_configured": true, 00:14:45.560 "data_offset": 2048, 00:14:45.560 "data_size": 63488 00:14:45.560 }, 00:14:45.560 { 00:14:45.560 "name": "BaseBdev3", 00:14:45.560 "uuid": "5dc3c879-42d0-11ef-96ac-773515fba644", 00:14:45.560 "is_configured": true, 00:14:45.560 "data_offset": 2048, 00:14:45.560 "data_size": 63488 00:14:45.560 }, 00:14:45.560 { 00:14:45.560 "name": "BaseBdev4", 00:14:45.560 "uuid": "5e92b3e7-42d0-11ef-96ac-773515fba644", 00:14:45.560 "is_configured": true, 00:14:45.560 "data_offset": 2048, 00:14:45.560 "data_size": 63488 00:14:45.560 } 00:14:45.560 ] 00:14:45.560 }' 00:14:45.560 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:45.560 17:33:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.817 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:45.817 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:45.817 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:45.817 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:46.075 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:46.075 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.075 17:33:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:46.334 [2024-07-15 17:33:42.017677] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:46.334 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:46.334 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:46.334 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.334 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:46.669 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:46.669 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:46.669 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:46.926 [2024-07-15 17:33:42.527387] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:46.926 17:33:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:46.926 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:46.926 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.926 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:47.184 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:47.184 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:47.184 17:33:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:47.441 [2024-07-15 17:33:43.033089] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:47.441 [2024-07-15 17:33:43.033117] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c2ad2434a00 name Existed_Raid, state offline 00:14:47.441 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:47.441 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:47.441 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.441 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:47.698 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:47.699 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:47.699 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:47.699 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:47.699 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:47.699 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:47.955 BaseBdev2 00:14:47.955 17:33:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:47.955 17:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:47.955 17:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:47.955 17:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:47.955 17:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:47.955 17:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:47.955 17:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:47.955 17:33:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:48.520 [ 
00:14:48.520 { 00:14:48.520 "name": "BaseBdev2", 00:14:48.520 "aliases": [ 00:14:48.520 "61ecb7f6-42d0-11ef-96ac-773515fba644" 00:14:48.520 ], 00:14:48.520 "product_name": "Malloc disk", 00:14:48.520 "block_size": 512, 00:14:48.520 "num_blocks": 65536, 00:14:48.520 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:48.520 "assigned_rate_limits": { 00:14:48.520 "rw_ios_per_sec": 0, 00:14:48.520 "rw_mbytes_per_sec": 0, 00:14:48.520 "r_mbytes_per_sec": 0, 00:14:48.520 "w_mbytes_per_sec": 0 00:14:48.520 }, 00:14:48.520 "claimed": false, 00:14:48.520 "zoned": false, 00:14:48.520 "supported_io_types": { 00:14:48.520 "read": true, 00:14:48.520 "write": true, 00:14:48.520 "unmap": true, 00:14:48.520 "flush": true, 00:14:48.520 "reset": true, 00:14:48.520 "nvme_admin": false, 00:14:48.520 "nvme_io": false, 00:14:48.520 "nvme_io_md": false, 00:14:48.520 "write_zeroes": true, 00:14:48.520 "zcopy": true, 00:14:48.520 "get_zone_info": false, 00:14:48.520 "zone_management": false, 00:14:48.520 "zone_append": false, 00:14:48.520 "compare": false, 00:14:48.520 "compare_and_write": false, 00:14:48.520 "abort": true, 00:14:48.520 "seek_hole": false, 00:14:48.520 "seek_data": false, 00:14:48.520 "copy": true, 00:14:48.520 "nvme_iov_md": false 00:14:48.520 }, 00:14:48.520 "memory_domains": [ 00:14:48.520 { 00:14:48.520 "dma_device_id": "system", 00:14:48.520 "dma_device_type": 1 00:14:48.520 }, 00:14:48.520 { 00:14:48.520 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.520 "dma_device_type": 2 00:14:48.520 } 00:14:48.520 ], 00:14:48.520 "driver_specific": {} 00:14:48.520 } 00:14:48.520 ] 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:48.520 BaseBdev3 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.520 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.085 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:49.085 [ 00:14:49.085 { 00:14:49.085 "name": "BaseBdev3", 00:14:49.085 "aliases": [ 00:14:49.085 "62659203-42d0-11ef-96ac-773515fba644" 00:14:49.085 ], 00:14:49.085 "product_name": "Malloc disk", 00:14:49.085 "block_size": 512, 00:14:49.085 "num_blocks": 65536, 00:14:49.085 "uuid": 
"62659203-42d0-11ef-96ac-773515fba644", 00:14:49.085 "assigned_rate_limits": { 00:14:49.085 "rw_ios_per_sec": 0, 00:14:49.085 "rw_mbytes_per_sec": 0, 00:14:49.085 "r_mbytes_per_sec": 0, 00:14:49.085 "w_mbytes_per_sec": 0 00:14:49.085 }, 00:14:49.085 "claimed": false, 00:14:49.085 "zoned": false, 00:14:49.085 "supported_io_types": { 00:14:49.085 "read": true, 00:14:49.085 "write": true, 00:14:49.085 "unmap": true, 00:14:49.085 "flush": true, 00:14:49.085 "reset": true, 00:14:49.085 "nvme_admin": false, 00:14:49.085 "nvme_io": false, 00:14:49.085 "nvme_io_md": false, 00:14:49.085 "write_zeroes": true, 00:14:49.085 "zcopy": true, 00:14:49.085 "get_zone_info": false, 00:14:49.085 "zone_management": false, 00:14:49.085 "zone_append": false, 00:14:49.085 "compare": false, 00:14:49.085 "compare_and_write": false, 00:14:49.085 "abort": true, 00:14:49.085 "seek_hole": false, 00:14:49.085 "seek_data": false, 00:14:49.085 "copy": true, 00:14:49.085 "nvme_iov_md": false 00:14:49.085 }, 00:14:49.085 "memory_domains": [ 00:14:49.085 { 00:14:49.085 "dma_device_id": "system", 00:14:49.085 "dma_device_type": 1 00:14:49.085 }, 00:14:49.085 { 00:14:49.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.085 "dma_device_type": 2 00:14:49.085 } 00:14:49.085 ], 00:14:49.085 "driver_specific": {} 00:14:49.085 } 00:14:49.085 ] 00:14:49.085 17:33:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:49.085 17:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:49.085 17:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:49.085 17:33:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:49.343 BaseBdev4 00:14:49.343 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:49.343 17:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:49.343 17:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:49.343 17:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:49.343 17:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:49.343 17:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:49.343 17:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:49.600 17:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:49.857 [ 00:14:49.857 { 00:14:49.857 "name": "BaseBdev4", 00:14:49.857 "aliases": [ 00:14:49.857 "62dbfaad-42d0-11ef-96ac-773515fba644" 00:14:49.857 ], 00:14:49.857 "product_name": "Malloc disk", 00:14:49.857 "block_size": 512, 00:14:49.857 "num_blocks": 65536, 00:14:49.857 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:49.857 "assigned_rate_limits": { 00:14:49.857 "rw_ios_per_sec": 0, 00:14:49.857 "rw_mbytes_per_sec": 0, 00:14:49.857 "r_mbytes_per_sec": 0, 00:14:49.857 "w_mbytes_per_sec": 0 00:14:49.857 }, 00:14:49.857 "claimed": false, 00:14:49.857 "zoned": false, 00:14:49.857 
"supported_io_types": { 00:14:49.857 "read": true, 00:14:49.857 "write": true, 00:14:49.857 "unmap": true, 00:14:49.857 "flush": true, 00:14:49.857 "reset": true, 00:14:49.857 "nvme_admin": false, 00:14:49.857 "nvme_io": false, 00:14:49.857 "nvme_io_md": false, 00:14:49.857 "write_zeroes": true, 00:14:49.857 "zcopy": true, 00:14:49.857 "get_zone_info": false, 00:14:49.857 "zone_management": false, 00:14:49.857 "zone_append": false, 00:14:49.857 "compare": false, 00:14:49.857 "compare_and_write": false, 00:14:49.857 "abort": true, 00:14:49.857 "seek_hole": false, 00:14:49.857 "seek_data": false, 00:14:49.857 "copy": true, 00:14:49.857 "nvme_iov_md": false 00:14:49.857 }, 00:14:49.857 "memory_domains": [ 00:14:49.857 { 00:14:49.857 "dma_device_id": "system", 00:14:49.857 "dma_device_type": 1 00:14:49.857 }, 00:14:49.857 { 00:14:49.857 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:49.857 "dma_device_type": 2 00:14:49.857 } 00:14:49.857 ], 00:14:49.857 "driver_specific": {} 00:14:49.857 } 00:14:49.857 ] 00:14:49.857 17:33:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:49.857 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:49.857 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:49.857 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:50.114 [2024-07-15 17:33:45.914943] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:50.115 [2024-07-15 17:33:45.914993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:50.115 [2024-07-15 17:33:45.915002] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:50.115 [2024-07-15 17:33:45.915562] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.115 [2024-07-15 17:33:45.915592] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:14:50.115 17:33:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.435 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:50.435 "name": "Existed_Raid", 00:14:50.435 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:50.435 "strip_size_kb": 64, 00:14:50.435 "state": "configuring", 00:14:50.435 "raid_level": "concat", 00:14:50.435 "superblock": true, 00:14:50.435 "num_base_bdevs": 4, 00:14:50.435 "num_base_bdevs_discovered": 3, 00:14:50.435 "num_base_bdevs_operational": 4, 00:14:50.435 "base_bdevs_list": [ 00:14:50.435 { 00:14:50.435 "name": "BaseBdev1", 00:14:50.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.435 "is_configured": false, 00:14:50.435 "data_offset": 0, 00:14:50.435 "data_size": 0 00:14:50.435 }, 00:14:50.435 { 00:14:50.435 "name": "BaseBdev2", 00:14:50.435 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:50.435 "is_configured": true, 00:14:50.435 "data_offset": 2048, 00:14:50.435 "data_size": 63488 00:14:50.435 }, 00:14:50.435 { 00:14:50.435 "name": "BaseBdev3", 00:14:50.435 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:50.435 "is_configured": true, 00:14:50.435 "data_offset": 2048, 00:14:50.435 "data_size": 63488 00:14:50.435 }, 00:14:50.435 { 00:14:50.435 "name": "BaseBdev4", 00:14:50.435 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:50.435 "is_configured": true, 00:14:50.435 "data_offset": 2048, 00:14:50.435 "data_size": 63488 00:14:50.435 } 00:14:50.435 ] 00:14:50.435 }' 00:14:50.435 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:50.435 17:33:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.694 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:51.259 [2024-07-15 17:33:46.794950] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.259 17:33:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.259 17:33:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.515 17:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.515 "name": "Existed_Raid", 00:14:51.515 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:51.515 "strip_size_kb": 64, 00:14:51.515 "state": "configuring", 00:14:51.515 "raid_level": "concat", 00:14:51.515 "superblock": true, 00:14:51.515 "num_base_bdevs": 4, 00:14:51.515 "num_base_bdevs_discovered": 2, 00:14:51.515 "num_base_bdevs_operational": 4, 00:14:51.515 "base_bdevs_list": [ 00:14:51.515 { 00:14:51.515 "name": "BaseBdev1", 00:14:51.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.515 "is_configured": false, 00:14:51.515 "data_offset": 0, 00:14:51.515 "data_size": 0 00:14:51.515 }, 00:14:51.515 { 00:14:51.515 "name": null, 00:14:51.515 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:51.515 "is_configured": false, 00:14:51.515 "data_offset": 2048, 00:14:51.515 "data_size": 63488 00:14:51.515 }, 00:14:51.515 { 00:14:51.515 "name": "BaseBdev3", 00:14:51.515 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:51.515 "is_configured": true, 00:14:51.515 "data_offset": 2048, 00:14:51.515 "data_size": 63488 00:14:51.515 }, 00:14:51.515 { 00:14:51.515 "name": "BaseBdev4", 00:14:51.515 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:51.515 "is_configured": true, 00:14:51.515 "data_offset": 2048, 00:14:51.515 "data_size": 63488 00:14:51.515 } 00:14:51.515 ] 00:14:51.515 }' 00:14:51.515 17:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.515 17:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.772 17:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.772 17:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:52.029 [2024-07-15 17:33:47.835095] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:52.029 BaseBdev1 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:52.029 17:33:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:52.286 17:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:52.850 [ 00:14:52.850 { 00:14:52.850 "name": "BaseBdev1", 00:14:52.850 "aliases": [ 00:14:52.850 "647d77ae-42d0-11ef-96ac-773515fba644" 00:14:52.850 ], 00:14:52.850 "product_name": "Malloc disk", 00:14:52.850 "block_size": 512, 00:14:52.850 "num_blocks": 65536, 00:14:52.850 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:52.850 "assigned_rate_limits": { 00:14:52.850 "rw_ios_per_sec": 0, 00:14:52.850 "rw_mbytes_per_sec": 0, 00:14:52.850 "r_mbytes_per_sec": 0, 00:14:52.850 "w_mbytes_per_sec": 0 00:14:52.850 }, 00:14:52.850 "claimed": true, 00:14:52.850 "claim_type": "exclusive_write", 00:14:52.850 "zoned": false, 00:14:52.850 "supported_io_types": { 00:14:52.850 "read": true, 00:14:52.850 "write": true, 00:14:52.850 "unmap": true, 00:14:52.850 "flush": true, 00:14:52.850 "reset": true, 00:14:52.850 "nvme_admin": false, 00:14:52.850 "nvme_io": false, 00:14:52.850 "nvme_io_md": false, 00:14:52.850 "write_zeroes": true, 00:14:52.850 "zcopy": true, 00:14:52.850 "get_zone_info": false, 00:14:52.850 "zone_management": false, 00:14:52.850 "zone_append": false, 00:14:52.850 "compare": false, 00:14:52.850 "compare_and_write": false, 00:14:52.850 "abort": true, 00:14:52.850 "seek_hole": false, 00:14:52.850 "seek_data": false, 00:14:52.850 "copy": true, 00:14:52.850 "nvme_iov_md": false 00:14:52.850 }, 00:14:52.850 "memory_domains": [ 00:14:52.851 { 00:14:52.851 "dma_device_id": "system", 00:14:52.851 "dma_device_type": 1 00:14:52.851 }, 00:14:52.851 { 00:14:52.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:52.851 "dma_device_type": 2 00:14:52.851 } 00:14:52.851 ], 00:14:52.851 "driver_specific": {} 00:14:52.851 } 00:14:52.851 ] 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.851 "name": "Existed_Raid", 00:14:52.851 "uuid": 
"63587e39-42d0-11ef-96ac-773515fba644", 00:14:52.851 "strip_size_kb": 64, 00:14:52.851 "state": "configuring", 00:14:52.851 "raid_level": "concat", 00:14:52.851 "superblock": true, 00:14:52.851 "num_base_bdevs": 4, 00:14:52.851 "num_base_bdevs_discovered": 3, 00:14:52.851 "num_base_bdevs_operational": 4, 00:14:52.851 "base_bdevs_list": [ 00:14:52.851 { 00:14:52.851 "name": "BaseBdev1", 00:14:52.851 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:52.851 "is_configured": true, 00:14:52.851 "data_offset": 2048, 00:14:52.851 "data_size": 63488 00:14:52.851 }, 00:14:52.851 { 00:14:52.851 "name": null, 00:14:52.851 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:52.851 "is_configured": false, 00:14:52.851 "data_offset": 2048, 00:14:52.851 "data_size": 63488 00:14:52.851 }, 00:14:52.851 { 00:14:52.851 "name": "BaseBdev3", 00:14:52.851 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:52.851 "is_configured": true, 00:14:52.851 "data_offset": 2048, 00:14:52.851 "data_size": 63488 00:14:52.851 }, 00:14:52.851 { 00:14:52.851 "name": "BaseBdev4", 00:14:52.851 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:52.851 "is_configured": true, 00:14:52.851 "data_offset": 2048, 00:14:52.851 "data_size": 63488 00:14:52.851 } 00:14:52.851 ] 00:14:52.851 }' 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.851 17:33:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.108 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.108 17:33:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:53.366 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:53.366 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:53.623 [2024-07-15 17:33:49.362990] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:14:53.623 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.881 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:53.881 "name": "Existed_Raid", 00:14:53.881 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:53.881 "strip_size_kb": 64, 00:14:53.881 "state": "configuring", 00:14:53.881 "raid_level": "concat", 00:14:53.881 "superblock": true, 00:14:53.881 "num_base_bdevs": 4, 00:14:53.881 "num_base_bdevs_discovered": 2, 00:14:53.881 "num_base_bdevs_operational": 4, 00:14:53.881 "base_bdevs_list": [ 00:14:53.881 { 00:14:53.881 "name": "BaseBdev1", 00:14:53.881 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:53.881 "is_configured": true, 00:14:53.881 "data_offset": 2048, 00:14:53.881 "data_size": 63488 00:14:53.881 }, 00:14:53.881 { 00:14:53.881 "name": null, 00:14:53.881 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:53.881 "is_configured": false, 00:14:53.881 "data_offset": 2048, 00:14:53.881 "data_size": 63488 00:14:53.881 }, 00:14:53.881 { 00:14:53.881 "name": null, 00:14:53.881 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:53.881 "is_configured": false, 00:14:53.881 "data_offset": 2048, 00:14:53.881 "data_size": 63488 00:14:53.881 }, 00:14:53.881 { 00:14:53.881 "name": "BaseBdev4", 00:14:53.881 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:53.881 "is_configured": true, 00:14:53.881 "data_offset": 2048, 00:14:53.881 "data_size": 63488 00:14:53.881 } 00:14:53.881 ] 00:14:53.881 }' 00:14:53.881 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:53.881 17:33:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.198 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.198 17:33:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:54.456 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:54.456 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:54.713 [2024-07-15 17:33:50.479045] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
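The same query/compare pattern drives the raid-level state checks around it: bdev_raid_get_bdevs all returns every raid bdev, jq selects the entry named Existed_Raid, and individual fields such as .state, .num_base_bdevs_discovered and the per-slot .is_configured flags are matched against what the test expects after each add/remove step. A condensed sketch of that flow, again assuming the rpc.py path and socket from this run; the expected values below are placeholders for one intermediate ("configuring") state:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock
  raid_info=$($rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  [[ $(jq -r .state <<< "$raid_info") == configuring ]]                # array still missing base bdevs
  [[ $(jq -r .num_base_bdevs_discovered <<< "$raid_info") == 2 ]]      # placeholder count
  [[ $($rpc -s $sock bdev_raid_get_bdevs all | jq '.[0].base_bdevs_list[2].is_configured') == false ]]
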
num_base_bdevs 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.713 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.970 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.970 "name": "Existed_Raid", 00:14:54.970 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:54.970 "strip_size_kb": 64, 00:14:54.970 "state": "configuring", 00:14:54.970 "raid_level": "concat", 00:14:54.970 "superblock": true, 00:14:54.970 "num_base_bdevs": 4, 00:14:54.970 "num_base_bdevs_discovered": 3, 00:14:54.970 "num_base_bdevs_operational": 4, 00:14:54.970 "base_bdevs_list": [ 00:14:54.970 { 00:14:54.970 "name": "BaseBdev1", 00:14:54.970 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:54.970 "is_configured": true, 00:14:54.970 "data_offset": 2048, 00:14:54.970 "data_size": 63488 00:14:54.970 }, 00:14:54.970 { 00:14:54.970 "name": null, 00:14:54.970 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:54.970 "is_configured": false, 00:14:54.970 "data_offset": 2048, 00:14:54.970 "data_size": 63488 00:14:54.970 }, 00:14:54.970 { 00:14:54.970 "name": "BaseBdev3", 00:14:54.970 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:54.970 "is_configured": true, 00:14:54.970 "data_offset": 2048, 00:14:54.970 "data_size": 63488 00:14:54.970 }, 00:14:54.970 { 00:14:54.970 "name": "BaseBdev4", 00:14:54.970 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:54.970 "is_configured": true, 00:14:54.970 "data_offset": 2048, 00:14:54.970 "data_size": 63488 00:14:54.970 } 00:14:54.970 ] 00:14:54.970 }' 00:14:54.970 17:33:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.970 17:33:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.534 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.534 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:55.791 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:55.791 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:56.048 [2024-07-15 17:33:51.623151] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.048 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:56.306 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:56.306 "name": "Existed_Raid", 00:14:56.306 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:56.306 "strip_size_kb": 64, 00:14:56.306 "state": "configuring", 00:14:56.306 "raid_level": "concat", 00:14:56.306 "superblock": true, 00:14:56.306 "num_base_bdevs": 4, 00:14:56.306 "num_base_bdevs_discovered": 2, 00:14:56.306 "num_base_bdevs_operational": 4, 00:14:56.306 "base_bdevs_list": [ 00:14:56.306 { 00:14:56.306 "name": null, 00:14:56.306 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:56.306 "is_configured": false, 00:14:56.306 "data_offset": 2048, 00:14:56.306 "data_size": 63488 00:14:56.306 }, 00:14:56.306 { 00:14:56.306 "name": null, 00:14:56.306 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:56.306 "is_configured": false, 00:14:56.306 "data_offset": 2048, 00:14:56.306 "data_size": 63488 00:14:56.306 }, 00:14:56.306 { 00:14:56.306 "name": "BaseBdev3", 00:14:56.306 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:56.306 "is_configured": true, 00:14:56.306 "data_offset": 2048, 00:14:56.306 "data_size": 63488 00:14:56.306 }, 00:14:56.306 { 00:14:56.306 "name": "BaseBdev4", 00:14:56.306 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:56.306 "is_configured": true, 00:14:56.306 "data_offset": 2048, 00:14:56.306 "data_size": 63488 00:14:56.306 } 00:14:56.306 ] 00:14:56.306 }' 00:14:56.306 17:33:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:56.306 17:33:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.564 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:56.564 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:56.821 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:56.821 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:57.079 [2024-07-15 17:33:52.737169] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- 
# local raid_bdev_name=Existed_Raid 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.079 17:33:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:57.337 17:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:57.337 "name": "Existed_Raid", 00:14:57.337 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:57.337 "strip_size_kb": 64, 00:14:57.337 "state": "configuring", 00:14:57.337 "raid_level": "concat", 00:14:57.337 "superblock": true, 00:14:57.337 "num_base_bdevs": 4, 00:14:57.337 "num_base_bdevs_discovered": 3, 00:14:57.337 "num_base_bdevs_operational": 4, 00:14:57.337 "base_bdevs_list": [ 00:14:57.337 { 00:14:57.337 "name": null, 00:14:57.337 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:57.337 "is_configured": false, 00:14:57.337 "data_offset": 2048, 00:14:57.337 "data_size": 63488 00:14:57.337 }, 00:14:57.337 { 00:14:57.337 "name": "BaseBdev2", 00:14:57.337 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:57.337 "is_configured": true, 00:14:57.337 "data_offset": 2048, 00:14:57.337 "data_size": 63488 00:14:57.337 }, 00:14:57.337 { 00:14:57.337 "name": "BaseBdev3", 00:14:57.337 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:57.337 "is_configured": true, 00:14:57.337 "data_offset": 2048, 00:14:57.337 "data_size": 63488 00:14:57.337 }, 00:14:57.337 { 00:14:57.337 "name": "BaseBdev4", 00:14:57.337 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:57.337 "is_configured": true, 00:14:57.337 "data_offset": 2048, 00:14:57.337 "data_size": 63488 00:14:57.337 } 00:14:57.337 ] 00:14:57.337 }' 00:14:57.337 17:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:57.337 17:33:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.594 17:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:57.594 17:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:58.160 17:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:58.160 17:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:58.160 
17:33:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:58.418 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 647d77ae-42d0-11ef-96ac-773515fba644 00:14:58.418 [2024-07-15 17:33:54.241324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:58.418 [2024-07-15 17:33:54.241377] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c2ad2434f00 00:14:58.418 [2024-07-15 17:33:54.241382] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:58.418 [2024-07-15 17:33:54.241403] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c2ad2497e20 00:14:58.418 [2024-07-15 17:33:54.241450] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c2ad2434f00 00:14:58.418 [2024-07-15 17:33:54.241454] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1c2ad2434f00 00:14:58.418 [2024-07-15 17:33:54.241474] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:58.418 NewBaseBdev 00:14:58.675 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:58.675 17:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:58.675 17:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:58.675 17:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:58.675 17:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:58.675 17:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:58.675 17:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:58.933 17:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:59.189 [ 00:14:59.189 { 00:14:59.189 "name": "NewBaseBdev", 00:14:59.189 "aliases": [ 00:14:59.189 "647d77ae-42d0-11ef-96ac-773515fba644" 00:14:59.189 ], 00:14:59.189 "product_name": "Malloc disk", 00:14:59.189 "block_size": 512, 00:14:59.189 "num_blocks": 65536, 00:14:59.189 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:59.189 "assigned_rate_limits": { 00:14:59.189 "rw_ios_per_sec": 0, 00:14:59.189 "rw_mbytes_per_sec": 0, 00:14:59.189 "r_mbytes_per_sec": 0, 00:14:59.189 "w_mbytes_per_sec": 0 00:14:59.189 }, 00:14:59.189 "claimed": true, 00:14:59.189 "claim_type": "exclusive_write", 00:14:59.189 "zoned": false, 00:14:59.189 "supported_io_types": { 00:14:59.189 "read": true, 00:14:59.189 "write": true, 00:14:59.189 "unmap": true, 00:14:59.189 "flush": true, 00:14:59.190 "reset": true, 00:14:59.190 "nvme_admin": false, 00:14:59.190 "nvme_io": false, 00:14:59.190 "nvme_io_md": false, 00:14:59.190 "write_zeroes": true, 00:14:59.190 "zcopy": true, 00:14:59.190 "get_zone_info": false, 00:14:59.190 "zone_management": false, 00:14:59.190 "zone_append": false, 00:14:59.190 "compare": false, 00:14:59.190 "compare_and_write": false, 00:14:59.190 "abort": 
true, 00:14:59.190 "seek_hole": false, 00:14:59.190 "seek_data": false, 00:14:59.190 "copy": true, 00:14:59.190 "nvme_iov_md": false 00:14:59.190 }, 00:14:59.190 "memory_domains": [ 00:14:59.190 { 00:14:59.190 "dma_device_id": "system", 00:14:59.190 "dma_device_type": 1 00:14:59.190 }, 00:14:59.190 { 00:14:59.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.190 "dma_device_type": 2 00:14:59.190 } 00:14:59.190 ], 00:14:59.190 "driver_specific": {} 00:14:59.190 } 00:14:59.190 ] 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:59.190 17:33:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:59.447 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:59.447 "name": "Existed_Raid", 00:14:59.447 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:59.447 "strip_size_kb": 64, 00:14:59.447 "state": "online", 00:14:59.447 "raid_level": "concat", 00:14:59.447 "superblock": true, 00:14:59.447 "num_base_bdevs": 4, 00:14:59.447 "num_base_bdevs_discovered": 4, 00:14:59.447 "num_base_bdevs_operational": 4, 00:14:59.447 "base_bdevs_list": [ 00:14:59.447 { 00:14:59.447 "name": "NewBaseBdev", 00:14:59.447 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:59.447 "is_configured": true, 00:14:59.447 "data_offset": 2048, 00:14:59.447 "data_size": 63488 00:14:59.447 }, 00:14:59.447 { 00:14:59.447 "name": "BaseBdev2", 00:14:59.447 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:59.447 "is_configured": true, 00:14:59.447 "data_offset": 2048, 00:14:59.447 "data_size": 63488 00:14:59.447 }, 00:14:59.447 { 00:14:59.447 "name": "BaseBdev3", 00:14:59.447 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:59.447 "is_configured": true, 00:14:59.447 "data_offset": 2048, 00:14:59.447 "data_size": 63488 00:14:59.447 }, 00:14:59.447 { 00:14:59.447 "name": "BaseBdev4", 00:14:59.447 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:59.447 "is_configured": true, 00:14:59.447 "data_offset": 2048, 00:14:59.447 "data_size": 63488 00:14:59.447 } 00:14:59.447 ] 00:14:59.447 }' 00:14:59.447 
17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:59.447 17:33:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.704 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:59.704 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:59.704 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:59.704 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:59.704 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:59.704 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:59.704 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:59.704 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:59.961 [2024-07-15 17:33:55.641261] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:59.962 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:59.962 "name": "Existed_Raid", 00:14:59.962 "aliases": [ 00:14:59.962 "63587e39-42d0-11ef-96ac-773515fba644" 00:14:59.962 ], 00:14:59.962 "product_name": "Raid Volume", 00:14:59.962 "block_size": 512, 00:14:59.962 "num_blocks": 253952, 00:14:59.962 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:59.962 "assigned_rate_limits": { 00:14:59.962 "rw_ios_per_sec": 0, 00:14:59.962 "rw_mbytes_per_sec": 0, 00:14:59.962 "r_mbytes_per_sec": 0, 00:14:59.962 "w_mbytes_per_sec": 0 00:14:59.962 }, 00:14:59.962 "claimed": false, 00:14:59.962 "zoned": false, 00:14:59.962 "supported_io_types": { 00:14:59.962 "read": true, 00:14:59.962 "write": true, 00:14:59.962 "unmap": true, 00:14:59.962 "flush": true, 00:14:59.962 "reset": true, 00:14:59.962 "nvme_admin": false, 00:14:59.962 "nvme_io": false, 00:14:59.962 "nvme_io_md": false, 00:14:59.962 "write_zeroes": true, 00:14:59.962 "zcopy": false, 00:14:59.962 "get_zone_info": false, 00:14:59.962 "zone_management": false, 00:14:59.962 "zone_append": false, 00:14:59.962 "compare": false, 00:14:59.962 "compare_and_write": false, 00:14:59.962 "abort": false, 00:14:59.962 "seek_hole": false, 00:14:59.962 "seek_data": false, 00:14:59.962 "copy": false, 00:14:59.962 "nvme_iov_md": false 00:14:59.962 }, 00:14:59.962 "memory_domains": [ 00:14:59.962 { 00:14:59.962 "dma_device_id": "system", 00:14:59.962 "dma_device_type": 1 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.962 "dma_device_type": 2 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "dma_device_id": "system", 00:14:59.962 "dma_device_type": 1 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.962 "dma_device_type": 2 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "dma_device_id": "system", 00:14:59.962 "dma_device_type": 1 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:59.962 "dma_device_type": 2 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "dma_device_id": "system", 00:14:59.962 "dma_device_type": 1 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:14:59.962 "dma_device_type": 2 00:14:59.962 } 00:14:59.962 ], 00:14:59.962 "driver_specific": { 00:14:59.962 "raid": { 00:14:59.962 "uuid": "63587e39-42d0-11ef-96ac-773515fba644", 00:14:59.962 "strip_size_kb": 64, 00:14:59.962 "state": "online", 00:14:59.962 "raid_level": "concat", 00:14:59.962 "superblock": true, 00:14:59.962 "num_base_bdevs": 4, 00:14:59.962 "num_base_bdevs_discovered": 4, 00:14:59.962 "num_base_bdevs_operational": 4, 00:14:59.962 "base_bdevs_list": [ 00:14:59.962 { 00:14:59.962 "name": "NewBaseBdev", 00:14:59.962 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:14:59.962 "is_configured": true, 00:14:59.962 "data_offset": 2048, 00:14:59.962 "data_size": 63488 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "name": "BaseBdev2", 00:14:59.962 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:14:59.962 "is_configured": true, 00:14:59.962 "data_offset": 2048, 00:14:59.962 "data_size": 63488 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "name": "BaseBdev3", 00:14:59.962 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:14:59.962 "is_configured": true, 00:14:59.962 "data_offset": 2048, 00:14:59.962 "data_size": 63488 00:14:59.962 }, 00:14:59.962 { 00:14:59.962 "name": "BaseBdev4", 00:14:59.962 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:14:59.962 "is_configured": true, 00:14:59.962 "data_offset": 2048, 00:14:59.962 "data_size": 63488 00:14:59.962 } 00:14:59.962 ] 00:14:59.962 } 00:14:59.962 } 00:14:59.962 }' 00:14:59.962 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:59.962 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:59.962 BaseBdev2 00:14:59.962 BaseBdev3 00:14:59.962 BaseBdev4' 00:14:59.962 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:59.962 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:59.962 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.219 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.219 "name": "NewBaseBdev", 00:15:00.219 "aliases": [ 00:15:00.219 "647d77ae-42d0-11ef-96ac-773515fba644" 00:15:00.219 ], 00:15:00.219 "product_name": "Malloc disk", 00:15:00.219 "block_size": 512, 00:15:00.219 "num_blocks": 65536, 00:15:00.219 "uuid": "647d77ae-42d0-11ef-96ac-773515fba644", 00:15:00.219 "assigned_rate_limits": { 00:15:00.219 "rw_ios_per_sec": 0, 00:15:00.219 "rw_mbytes_per_sec": 0, 00:15:00.219 "r_mbytes_per_sec": 0, 00:15:00.219 "w_mbytes_per_sec": 0 00:15:00.219 }, 00:15:00.219 "claimed": true, 00:15:00.219 "claim_type": "exclusive_write", 00:15:00.219 "zoned": false, 00:15:00.219 "supported_io_types": { 00:15:00.219 "read": true, 00:15:00.219 "write": true, 00:15:00.219 "unmap": true, 00:15:00.219 "flush": true, 00:15:00.219 "reset": true, 00:15:00.219 "nvme_admin": false, 00:15:00.219 "nvme_io": false, 00:15:00.219 "nvme_io_md": false, 00:15:00.219 "write_zeroes": true, 00:15:00.219 "zcopy": true, 00:15:00.219 "get_zone_info": false, 00:15:00.219 "zone_management": false, 00:15:00.219 "zone_append": false, 00:15:00.219 "compare": false, 00:15:00.219 "compare_and_write": false, 00:15:00.219 "abort": true, 00:15:00.219 "seek_hole": false, 00:15:00.219 "seek_data": false, 
00:15:00.219 "copy": true, 00:15:00.219 "nvme_iov_md": false 00:15:00.219 }, 00:15:00.219 "memory_domains": [ 00:15:00.219 { 00:15:00.219 "dma_device_id": "system", 00:15:00.219 "dma_device_type": 1 00:15:00.219 }, 00:15:00.219 { 00:15:00.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.219 "dma_device_type": 2 00:15:00.219 } 00:15:00.219 ], 00:15:00.219 "driver_specific": {} 00:15:00.219 }' 00:15:00.219 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.219 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:00.220 17:33:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.477 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.477 "name": "BaseBdev2", 00:15:00.477 "aliases": [ 00:15:00.477 "61ecb7f6-42d0-11ef-96ac-773515fba644" 00:15:00.477 ], 00:15:00.477 "product_name": "Malloc disk", 00:15:00.477 "block_size": 512, 00:15:00.477 "num_blocks": 65536, 00:15:00.477 "uuid": "61ecb7f6-42d0-11ef-96ac-773515fba644", 00:15:00.477 "assigned_rate_limits": { 00:15:00.477 "rw_ios_per_sec": 0, 00:15:00.477 "rw_mbytes_per_sec": 0, 00:15:00.477 "r_mbytes_per_sec": 0, 00:15:00.477 "w_mbytes_per_sec": 0 00:15:00.477 }, 00:15:00.477 "claimed": true, 00:15:00.477 "claim_type": "exclusive_write", 00:15:00.477 "zoned": false, 00:15:00.477 "supported_io_types": { 00:15:00.477 "read": true, 00:15:00.477 "write": true, 00:15:00.477 "unmap": true, 00:15:00.477 "flush": true, 00:15:00.477 "reset": true, 00:15:00.477 "nvme_admin": false, 00:15:00.477 "nvme_io": false, 00:15:00.477 "nvme_io_md": false, 00:15:00.477 "write_zeroes": true, 00:15:00.477 "zcopy": true, 00:15:00.477 "get_zone_info": false, 00:15:00.477 "zone_management": false, 00:15:00.477 "zone_append": false, 00:15:00.477 "compare": false, 00:15:00.477 "compare_and_write": false, 00:15:00.477 "abort": true, 00:15:00.477 "seek_hole": false, 00:15:00.477 "seek_data": false, 00:15:00.477 "copy": true, 00:15:00.477 "nvme_iov_md": false 00:15:00.477 }, 00:15:00.478 "memory_domains": [ 00:15:00.478 { 00:15:00.478 
"dma_device_id": "system", 00:15:00.478 "dma_device_type": 1 00:15:00.478 }, 00:15:00.478 { 00:15:00.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:00.478 "dma_device_type": 2 00:15:00.478 } 00:15:00.478 ], 00:15:00.478 "driver_specific": {} 00:15:00.478 }' 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:00.478 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:00.735 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:00.735 "name": "BaseBdev3", 00:15:00.735 "aliases": [ 00:15:00.735 "62659203-42d0-11ef-96ac-773515fba644" 00:15:00.735 ], 00:15:00.735 "product_name": "Malloc disk", 00:15:00.735 "block_size": 512, 00:15:00.735 "num_blocks": 65536, 00:15:00.735 "uuid": "62659203-42d0-11ef-96ac-773515fba644", 00:15:00.735 "assigned_rate_limits": { 00:15:00.735 "rw_ios_per_sec": 0, 00:15:00.735 "rw_mbytes_per_sec": 0, 00:15:00.735 "r_mbytes_per_sec": 0, 00:15:00.735 "w_mbytes_per_sec": 0 00:15:00.735 }, 00:15:00.735 "claimed": true, 00:15:00.735 "claim_type": "exclusive_write", 00:15:00.735 "zoned": false, 00:15:00.735 "supported_io_types": { 00:15:00.735 "read": true, 00:15:00.735 "write": true, 00:15:00.735 "unmap": true, 00:15:00.735 "flush": true, 00:15:00.735 "reset": true, 00:15:00.735 "nvme_admin": false, 00:15:00.735 "nvme_io": false, 00:15:00.735 "nvme_io_md": false, 00:15:00.735 "write_zeroes": true, 00:15:00.735 "zcopy": true, 00:15:00.735 "get_zone_info": false, 00:15:00.735 "zone_management": false, 00:15:00.735 "zone_append": false, 00:15:00.735 "compare": false, 00:15:00.735 "compare_and_write": false, 00:15:00.735 "abort": true, 00:15:00.735 "seek_hole": false, 00:15:00.735 "seek_data": false, 00:15:00.735 "copy": true, 00:15:00.735 "nvme_iov_md": false 00:15:00.735 }, 00:15:00.735 "memory_domains": [ 00:15:00.735 { 00:15:00.735 "dma_device_id": "system", 00:15:00.735 "dma_device_type": 1 00:15:00.735 }, 00:15:00.735 { 00:15:00.735 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:15:00.735 "dma_device_type": 2 00:15:00.735 } 00:15:00.735 ], 00:15:00.735 "driver_specific": {} 00:15:00.735 }' 00:15:00.735 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.735 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:00.735 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:00.735 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.992 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:00.993 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.281 "name": "BaseBdev4", 00:15:01.281 "aliases": [ 00:15:01.281 "62dbfaad-42d0-11ef-96ac-773515fba644" 00:15:01.281 ], 00:15:01.281 "product_name": "Malloc disk", 00:15:01.281 "block_size": 512, 00:15:01.281 "num_blocks": 65536, 00:15:01.281 "uuid": "62dbfaad-42d0-11ef-96ac-773515fba644", 00:15:01.281 "assigned_rate_limits": { 00:15:01.281 "rw_ios_per_sec": 0, 00:15:01.281 "rw_mbytes_per_sec": 0, 00:15:01.281 "r_mbytes_per_sec": 0, 00:15:01.281 "w_mbytes_per_sec": 0 00:15:01.281 }, 00:15:01.281 "claimed": true, 00:15:01.281 "claim_type": "exclusive_write", 00:15:01.281 "zoned": false, 00:15:01.281 "supported_io_types": { 00:15:01.281 "read": true, 00:15:01.281 "write": true, 00:15:01.281 "unmap": true, 00:15:01.281 "flush": true, 00:15:01.281 "reset": true, 00:15:01.281 "nvme_admin": false, 00:15:01.281 "nvme_io": false, 00:15:01.281 "nvme_io_md": false, 00:15:01.281 "write_zeroes": true, 00:15:01.281 "zcopy": true, 00:15:01.281 "get_zone_info": false, 00:15:01.281 "zone_management": false, 00:15:01.281 "zone_append": false, 00:15:01.281 "compare": false, 00:15:01.281 "compare_and_write": false, 00:15:01.281 "abort": true, 00:15:01.281 "seek_hole": false, 00:15:01.281 "seek_data": false, 00:15:01.281 "copy": true, 00:15:01.281 "nvme_iov_md": false 00:15:01.281 }, 00:15:01.281 "memory_domains": [ 00:15:01.281 { 00:15:01.281 "dma_device_id": "system", 00:15:01.281 "dma_device_type": 1 00:15:01.281 }, 00:15:01.281 { 00:15:01.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.281 "dma_device_type": 2 00:15:01.281 } 00:15:01.281 ], 00:15:01.281 "driver_specific": {} 00:15:01.281 }' 00:15:01.281 17:33:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.281 17:33:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:01.539 [2024-07-15 17:33:57.189322] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:01.539 [2024-07-15 17:33:57.189345] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:01.539 [2024-07-15 17:33:57.189384] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:01.539 [2024-07-15 17:33:57.189399] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:01.539 [2024-07-15 17:33:57.189403] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c2ad2434f00 name Existed_Raid, state offline 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 61494 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 61494 ']' 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 61494 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 61494 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:01.539 killing process with pid 61494 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61494' 00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 61494 00:15:01.539 [2024-07-15 17:33:57.219183] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
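Everything in the trace above is driven through scripts/rpc.py against the bdev_svc app that listens on /var/tmp/spdk-raid.sock (started with -L bdev_raid). A minimal shell sketch of the same sequence, kept to RPC calls that actually appear in this run — the 32 MB size, 512-byte block size, the UUID, the 2000 ms timeout and the Existed_Raid name are simply the values visible in this log, not fixed requirements:

    # assumes bdev_svc is already running:
    #   test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # create a 32 MB malloc base bdev with 512-byte blocks and an explicit UUID
    $rpc -s $sock bdev_malloc_create 32 512 -b NewBaseBdev -u 647d77ae-42d0-11ef-96ac-773515fba644

    # let examine callbacks finish, then wait (up to 2000 ms) for the bdev to show up
    $rpc -s $sock bdev_wait_for_examine
    $rpc -s $sock bdev_get_bdevs -b NewBaseBdev -t 2000

    # dump the raid bdev state the test asserts on (state, raid_level, base_bdevs_list)
    $rpc -s $sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    # tear the raid bdev down again
    $rpc -s $sock bdev_raid_delete Existed_Raid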
00:15:01.539 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 61494 00:15:01.539 [2024-07-15 17:33:57.242758] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.797 17:33:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:15:01.797 ************************************ 00:15:01.797 00:15:01.797 real 0m27.193s 00:15:01.797 user 0m49.860s 00:15:01.797 sys 0m3.649s 00:15:01.797 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:01.797 17:33:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.797 END TEST raid_state_function_test_sb 00:15:01.797 ************************************ 00:15:01.797 17:33:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:01.797 17:33:57 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:15:01.797 17:33:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:15:01.797 17:33:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:01.797 17:33:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:01.797 ************************************ 00:15:01.797 START TEST raid_superblock_test 00:15:01.797 ************************************ 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=62312 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 
62312 /var/tmp/spdk-raid.sock 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 62312 ']' 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:01.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:01.797 17:33:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:01.797 [2024-07-15 17:33:57.478846] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:15:01.797 [2024-07-15 17:33:57.479078] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:02.363 EAL: TSC is not safe to use in SMP mode 00:15:02.363 EAL: TSC is not invariant 00:15:02.363 [2024-07-15 17:33:58.002345] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.363 [2024-07-15 17:33:58.086581] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:02.363 [2024-07-15 17:33:58.088758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.363 [2024-07-15 17:33:58.089567] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.363 [2024-07-15 17:33:58.089579] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:02.928 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:15:03.185 malloc1 00:15:03.185 17:33:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:03.443 [2024-07-15 17:33:59.046045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:03.443 [2024-07-15 
17:33:59.046093] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.443 [2024-07-15 17:33:59.046105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b234780 00:15:03.443 [2024-07-15 17:33:59.046113] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.443 [2024-07-15 17:33:59.046996] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.443 [2024-07-15 17:33:59.047019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:03.443 pt1 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.443 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:15:03.702 malloc2 00:15:03.702 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:03.961 [2024-07-15 17:33:59.558053] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:03.961 [2024-07-15 17:33:59.558119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:03.961 [2024-07-15 17:33:59.558148] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b234c80 00:15:03.961 [2024-07-15 17:33:59.558156] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:03.961 [2024-07-15 17:33:59.558839] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:03.961 [2024-07-15 17:33:59.558862] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:03.961 pt2 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:03.961 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:15:04.220 malloc3 00:15:04.220 17:33:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:04.220 [2024-07-15 17:34:00.026058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:04.220 [2024-07-15 17:34:00.026122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.220 [2024-07-15 17:34:00.026149] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b235180 00:15:04.220 [2024-07-15 17:34:00.026158] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.220 [2024-07-15 17:34:00.026787] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.220 [2024-07-15 17:34:00.026811] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:04.220 pt3 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:15:04.220 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:15:04.479 malloc4 00:15:04.479 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:04.737 [2024-07-15 17:34:00.530099] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:04.737 [2024-07-15 17:34:00.530153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.737 [2024-07-15 17:34:00.530165] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b235680 00:15:04.737 [2024-07-15 17:34:00.530173] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.737 [2024-07-15 17:34:00.530800] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.737 [2024-07-15 17:34:00.530837] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:04.737 pt4 00:15:04.737 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:04.737 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:04.737 17:34:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:04.995 [2024-07-15 17:34:00.766095] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:04.995 [2024-07-15 17:34:00.766675] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:04.995 [2024-07-15 17:34:00.766698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:04.995 [2024-07-15 17:34:00.766709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:04.995 [2024-07-15 17:34:00.766763] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20a35b235900 00:15:04.995 [2024-07-15 17:34:00.766769] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:04.995 [2024-07-15 17:34:00.766800] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a35b297e20 00:15:04.995 [2024-07-15 17:34:00.766876] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20a35b235900 00:15:04.995 [2024-07-15 17:34:00.766880] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20a35b235900 00:15:04.995 [2024-07-15 17:34:00.766916] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.995 17:34:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.561 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.561 "name": "raid_bdev1", 00:15:05.561 "uuid": "6c329937-42d0-11ef-96ac-773515fba644", 00:15:05.561 "strip_size_kb": 64, 00:15:05.561 "state": "online", 00:15:05.561 "raid_level": "concat", 00:15:05.561 "superblock": true, 00:15:05.561 "num_base_bdevs": 4, 00:15:05.561 "num_base_bdevs_discovered": 4, 00:15:05.561 "num_base_bdevs_operational": 4, 00:15:05.561 "base_bdevs_list": [ 00:15:05.561 { 00:15:05.561 "name": "pt1", 00:15:05.561 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.561 "is_configured": true, 00:15:05.561 "data_offset": 2048, 00:15:05.561 
"data_size": 63488 00:15:05.561 }, 00:15:05.561 { 00:15:05.561 "name": "pt2", 00:15:05.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.561 "is_configured": true, 00:15:05.561 "data_offset": 2048, 00:15:05.561 "data_size": 63488 00:15:05.561 }, 00:15:05.561 { 00:15:05.561 "name": "pt3", 00:15:05.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.561 "is_configured": true, 00:15:05.561 "data_offset": 2048, 00:15:05.561 "data_size": 63488 00:15:05.561 }, 00:15:05.561 { 00:15:05.561 "name": "pt4", 00:15:05.561 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.561 "is_configured": true, 00:15:05.561 "data_offset": 2048, 00:15:05.561 "data_size": 63488 00:15:05.561 } 00:15:05.561 ] 00:15:05.561 }' 00:15:05.561 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.561 17:34:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.819 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:05.819 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:05.819 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:05.819 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:05.819 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:05.819 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:05.819 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:05.819 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:06.078 [2024-07-15 17:34:01.746218] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:06.078 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:06.078 "name": "raid_bdev1", 00:15:06.078 "aliases": [ 00:15:06.078 "6c329937-42d0-11ef-96ac-773515fba644" 00:15:06.078 ], 00:15:06.078 "product_name": "Raid Volume", 00:15:06.078 "block_size": 512, 00:15:06.078 "num_blocks": 253952, 00:15:06.078 "uuid": "6c329937-42d0-11ef-96ac-773515fba644", 00:15:06.078 "assigned_rate_limits": { 00:15:06.078 "rw_ios_per_sec": 0, 00:15:06.078 "rw_mbytes_per_sec": 0, 00:15:06.078 "r_mbytes_per_sec": 0, 00:15:06.078 "w_mbytes_per_sec": 0 00:15:06.078 }, 00:15:06.078 "claimed": false, 00:15:06.078 "zoned": false, 00:15:06.078 "supported_io_types": { 00:15:06.078 "read": true, 00:15:06.078 "write": true, 00:15:06.078 "unmap": true, 00:15:06.078 "flush": true, 00:15:06.078 "reset": true, 00:15:06.078 "nvme_admin": false, 00:15:06.078 "nvme_io": false, 00:15:06.078 "nvme_io_md": false, 00:15:06.078 "write_zeroes": true, 00:15:06.078 "zcopy": false, 00:15:06.078 "get_zone_info": false, 00:15:06.078 "zone_management": false, 00:15:06.078 "zone_append": false, 00:15:06.078 "compare": false, 00:15:06.078 "compare_and_write": false, 00:15:06.078 "abort": false, 00:15:06.078 "seek_hole": false, 00:15:06.078 "seek_data": false, 00:15:06.078 "copy": false, 00:15:06.078 "nvme_iov_md": false 00:15:06.078 }, 00:15:06.078 "memory_domains": [ 00:15:06.078 { 00:15:06.078 "dma_device_id": "system", 00:15:06.078 "dma_device_type": 1 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.078 
"dma_device_type": 2 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "dma_device_id": "system", 00:15:06.078 "dma_device_type": 1 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.078 "dma_device_type": 2 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "dma_device_id": "system", 00:15:06.078 "dma_device_type": 1 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.078 "dma_device_type": 2 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "dma_device_id": "system", 00:15:06.078 "dma_device_type": 1 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.078 "dma_device_type": 2 00:15:06.078 } 00:15:06.078 ], 00:15:06.078 "driver_specific": { 00:15:06.078 "raid": { 00:15:06.078 "uuid": "6c329937-42d0-11ef-96ac-773515fba644", 00:15:06.078 "strip_size_kb": 64, 00:15:06.078 "state": "online", 00:15:06.078 "raid_level": "concat", 00:15:06.078 "superblock": true, 00:15:06.078 "num_base_bdevs": 4, 00:15:06.078 "num_base_bdevs_discovered": 4, 00:15:06.078 "num_base_bdevs_operational": 4, 00:15:06.078 "base_bdevs_list": [ 00:15:06.078 { 00:15:06.078 "name": "pt1", 00:15:06.078 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.078 "is_configured": true, 00:15:06.078 "data_offset": 2048, 00:15:06.078 "data_size": 63488 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "name": "pt2", 00:15:06.078 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.078 "is_configured": true, 00:15:06.078 "data_offset": 2048, 00:15:06.078 "data_size": 63488 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "name": "pt3", 00:15:06.078 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.078 "is_configured": true, 00:15:06.078 "data_offset": 2048, 00:15:06.078 "data_size": 63488 00:15:06.078 }, 00:15:06.078 { 00:15:06.078 "name": "pt4", 00:15:06.078 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.078 "is_configured": true, 00:15:06.078 "data_offset": 2048, 00:15:06.078 "data_size": 63488 00:15:06.078 } 00:15:06.078 ] 00:15:06.078 } 00:15:06.078 } 00:15:06.078 }' 00:15:06.079 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:06.079 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:06.079 pt2 00:15:06.079 pt3 00:15:06.079 pt4' 00:15:06.079 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.079 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:06.079 17:34:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.337 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.337 "name": "pt1", 00:15:06.337 "aliases": [ 00:15:06.337 "00000000-0000-0000-0000-000000000001" 00:15:06.337 ], 00:15:06.338 "product_name": "passthru", 00:15:06.338 "block_size": 512, 00:15:06.338 "num_blocks": 65536, 00:15:06.338 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.338 "assigned_rate_limits": { 00:15:06.338 "rw_ios_per_sec": 0, 00:15:06.338 "rw_mbytes_per_sec": 0, 00:15:06.338 "r_mbytes_per_sec": 0, 00:15:06.338 "w_mbytes_per_sec": 0 00:15:06.338 }, 00:15:06.338 "claimed": true, 00:15:06.338 "claim_type": "exclusive_write", 00:15:06.338 "zoned": false, 00:15:06.338 "supported_io_types": { 00:15:06.338 "read": true, 
00:15:06.338 "write": true, 00:15:06.338 "unmap": true, 00:15:06.338 "flush": true, 00:15:06.338 "reset": true, 00:15:06.338 "nvme_admin": false, 00:15:06.338 "nvme_io": false, 00:15:06.338 "nvme_io_md": false, 00:15:06.338 "write_zeroes": true, 00:15:06.338 "zcopy": true, 00:15:06.338 "get_zone_info": false, 00:15:06.338 "zone_management": false, 00:15:06.338 "zone_append": false, 00:15:06.338 "compare": false, 00:15:06.338 "compare_and_write": false, 00:15:06.338 "abort": true, 00:15:06.338 "seek_hole": false, 00:15:06.338 "seek_data": false, 00:15:06.338 "copy": true, 00:15:06.338 "nvme_iov_md": false 00:15:06.338 }, 00:15:06.338 "memory_domains": [ 00:15:06.338 { 00:15:06.338 "dma_device_id": "system", 00:15:06.338 "dma_device_type": 1 00:15:06.338 }, 00:15:06.338 { 00:15:06.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.338 "dma_device_type": 2 00:15:06.338 } 00:15:06.338 ], 00:15:06.338 "driver_specific": { 00:15:06.338 "passthru": { 00:15:06.338 "name": "pt1", 00:15:06.338 "base_bdev_name": "malloc1" 00:15:06.338 } 00:15:06.338 } 00:15:06.338 }' 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:06.338 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:06.597 "name": "pt2", 00:15:06.597 "aliases": [ 00:15:06.597 "00000000-0000-0000-0000-000000000002" 00:15:06.597 ], 00:15:06.597 "product_name": "passthru", 00:15:06.597 "block_size": 512, 00:15:06.597 "num_blocks": 65536, 00:15:06.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.597 "assigned_rate_limits": { 00:15:06.597 "rw_ios_per_sec": 0, 00:15:06.597 "rw_mbytes_per_sec": 0, 00:15:06.597 "r_mbytes_per_sec": 0, 00:15:06.597 "w_mbytes_per_sec": 0 00:15:06.597 }, 00:15:06.597 "claimed": true, 00:15:06.597 "claim_type": "exclusive_write", 00:15:06.597 "zoned": false, 00:15:06.597 "supported_io_types": { 00:15:06.597 "read": true, 00:15:06.597 "write": true, 00:15:06.597 "unmap": true, 00:15:06.597 "flush": true, 00:15:06.597 "reset": true, 00:15:06.597 "nvme_admin": false, 
00:15:06.597 "nvme_io": false, 00:15:06.597 "nvme_io_md": false, 00:15:06.597 "write_zeroes": true, 00:15:06.597 "zcopy": true, 00:15:06.597 "get_zone_info": false, 00:15:06.597 "zone_management": false, 00:15:06.597 "zone_append": false, 00:15:06.597 "compare": false, 00:15:06.597 "compare_and_write": false, 00:15:06.597 "abort": true, 00:15:06.597 "seek_hole": false, 00:15:06.597 "seek_data": false, 00:15:06.597 "copy": true, 00:15:06.597 "nvme_iov_md": false 00:15:06.597 }, 00:15:06.597 "memory_domains": [ 00:15:06.597 { 00:15:06.597 "dma_device_id": "system", 00:15:06.597 "dma_device_type": 1 00:15:06.597 }, 00:15:06.597 { 00:15:06.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:06.597 "dma_device_type": 2 00:15:06.597 } 00:15:06.597 ], 00:15:06.597 "driver_specific": { 00:15:06.597 "passthru": { 00:15:06.597 "name": "pt2", 00:15:06.597 "base_bdev_name": "malloc2" 00:15:06.597 } 00:15:06.597 } 00:15:06.597 }' 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.597 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:06.857 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:06.857 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:06.857 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:06.857 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.116 "name": "pt3", 00:15:07.116 "aliases": [ 00:15:07.116 "00000000-0000-0000-0000-000000000003" 00:15:07.116 ], 00:15:07.116 "product_name": "passthru", 00:15:07.116 "block_size": 512, 00:15:07.116 "num_blocks": 65536, 00:15:07.116 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.116 "assigned_rate_limits": { 00:15:07.116 "rw_ios_per_sec": 0, 00:15:07.116 "rw_mbytes_per_sec": 0, 00:15:07.116 "r_mbytes_per_sec": 0, 00:15:07.116 "w_mbytes_per_sec": 0 00:15:07.116 }, 00:15:07.116 "claimed": true, 00:15:07.116 "claim_type": "exclusive_write", 00:15:07.116 "zoned": false, 00:15:07.116 "supported_io_types": { 00:15:07.116 "read": true, 00:15:07.116 "write": true, 00:15:07.116 "unmap": true, 00:15:07.116 "flush": true, 00:15:07.116 "reset": true, 00:15:07.116 "nvme_admin": false, 00:15:07.116 "nvme_io": false, 00:15:07.116 "nvme_io_md": false, 00:15:07.116 "write_zeroes": true, 00:15:07.116 "zcopy": true, 00:15:07.116 
"get_zone_info": false, 00:15:07.116 "zone_management": false, 00:15:07.116 "zone_append": false, 00:15:07.116 "compare": false, 00:15:07.116 "compare_and_write": false, 00:15:07.116 "abort": true, 00:15:07.116 "seek_hole": false, 00:15:07.116 "seek_data": false, 00:15:07.116 "copy": true, 00:15:07.116 "nvme_iov_md": false 00:15:07.116 }, 00:15:07.116 "memory_domains": [ 00:15:07.116 { 00:15:07.116 "dma_device_id": "system", 00:15:07.116 "dma_device_type": 1 00:15:07.116 }, 00:15:07.116 { 00:15:07.116 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.116 "dma_device_type": 2 00:15:07.116 } 00:15:07.116 ], 00:15:07.116 "driver_specific": { 00:15:07.116 "passthru": { 00:15:07.116 "name": "pt3", 00:15:07.116 "base_bdev_name": "malloc3" 00:15:07.116 } 00:15:07.116 } 00:15:07.116 }' 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:07.116 17:34:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:07.375 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:07.375 "name": "pt4", 00:15:07.375 "aliases": [ 00:15:07.375 "00000000-0000-0000-0000-000000000004" 00:15:07.375 ], 00:15:07.375 "product_name": "passthru", 00:15:07.375 "block_size": 512, 00:15:07.375 "num_blocks": 65536, 00:15:07.375 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.375 "assigned_rate_limits": { 00:15:07.375 "rw_ios_per_sec": 0, 00:15:07.375 "rw_mbytes_per_sec": 0, 00:15:07.375 "r_mbytes_per_sec": 0, 00:15:07.375 "w_mbytes_per_sec": 0 00:15:07.375 }, 00:15:07.375 "claimed": true, 00:15:07.375 "claim_type": "exclusive_write", 00:15:07.375 "zoned": false, 00:15:07.375 "supported_io_types": { 00:15:07.375 "read": true, 00:15:07.375 "write": true, 00:15:07.375 "unmap": true, 00:15:07.375 "flush": true, 00:15:07.375 "reset": true, 00:15:07.375 "nvme_admin": false, 00:15:07.375 "nvme_io": false, 00:15:07.375 "nvme_io_md": false, 00:15:07.375 "write_zeroes": true, 00:15:07.375 "zcopy": true, 00:15:07.375 "get_zone_info": false, 00:15:07.375 "zone_management": false, 00:15:07.375 "zone_append": false, 00:15:07.375 "compare": false, 00:15:07.375 
"compare_and_write": false, 00:15:07.375 "abort": true, 00:15:07.375 "seek_hole": false, 00:15:07.375 "seek_data": false, 00:15:07.375 "copy": true, 00:15:07.375 "nvme_iov_md": false 00:15:07.375 }, 00:15:07.375 "memory_domains": [ 00:15:07.375 { 00:15:07.375 "dma_device_id": "system", 00:15:07.375 "dma_device_type": 1 00:15:07.375 }, 00:15:07.375 { 00:15:07.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.375 "dma_device_type": 2 00:15:07.375 } 00:15:07.375 ], 00:15:07.375 "driver_specific": { 00:15:07.376 "passthru": { 00:15:07.376 "name": "pt4", 00:15:07.376 "base_bdev_name": "malloc4" 00:15:07.376 } 00:15:07.376 } 00:15:07.376 }' 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:07.376 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:07.640 [2024-07-15 17:34:03.278294] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.640 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=6c329937-42d0-11ef-96ac-773515fba644 00:15:07.640 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 6c329937-42d0-11ef-96ac-773515fba644 ']' 00:15:07.640 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:07.899 [2024-07-15 17:34:03.558246] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:07.899 [2024-07-15 17:34:03.558285] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:07.899 [2024-07-15 17:34:03.558309] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:07.899 [2024-07-15 17:34:03.558325] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:07.899 [2024-07-15 17:34:03.558329] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20a35b235900 name raid_bdev1, state offline 00:15:07.899 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.899 
17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:08.157 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:08.157 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:08.157 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.157 17:34:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:08.415 17:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.415 17:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:08.755 17:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:08.755 17:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:09.013 17:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:09.013 17:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:09.271 17:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:09.271 17:34:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:09.528 17:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:09.528 17:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:09.528 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:09.528 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:09.528 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.528 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.528 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.528 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.529 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.529 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:09.529 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:09.529 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:09.529 
17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:09.786 [2024-07-15 17:34:05.474294] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:09.786 [2024-07-15 17:34:05.474875] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:09.786 [2024-07-15 17:34:05.474896] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:09.786 [2024-07-15 17:34:05.474904] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:09.786 [2024-07-15 17:34:05.474918] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:09.787 [2024-07-15 17:34:05.474973] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:09.787 [2024-07-15 17:34:05.474984] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:09.787 [2024-07-15 17:34:05.474994] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:09.787 [2024-07-15 17:34:05.475002] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:09.787 [2024-07-15 17:34:05.475006] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20a35b235680 name raid_bdev1, state configuring 00:15:09.787 request: 00:15:09.787 { 00:15:09.787 "name": "raid_bdev1", 00:15:09.787 "raid_level": "concat", 00:15:09.787 "base_bdevs": [ 00:15:09.787 "malloc1", 00:15:09.787 "malloc2", 00:15:09.787 "malloc3", 00:15:09.787 "malloc4" 00:15:09.787 ], 00:15:09.787 "strip_size_kb": 64, 00:15:09.787 "superblock": false, 00:15:09.787 "method": "bdev_raid_create", 00:15:09.787 "req_id": 1 00:15:09.787 } 00:15:09.787 Got JSON-RPC error response 00:15:09.787 response: 00:15:09.787 { 00:15:09.787 "code": -17, 00:15:09.787 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:09.787 } 00:15:09.787 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:09.787 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:09.787 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:09.787 17:34:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:09.787 17:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:09.787 17:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:10.045 17:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:10.045 17:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:10.045 17:34:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:10.303 [2024-07-15 17:34:05.998305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:10.303 [2024-07-15 17:34:05.998375] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:10.303 [2024-07-15 17:34:05.998387] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b235180 00:15:10.303 [2024-07-15 17:34:05.998395] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:10.303 [2024-07-15 17:34:05.999060] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:10.303 [2024-07-15 17:34:05.999085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:10.303 [2024-07-15 17:34:05.999110] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:10.303 [2024-07-15 17:34:05.999122] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:10.303 pt1 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:10.303 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.562 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:10.562 "name": "raid_bdev1", 00:15:10.562 "uuid": "6c329937-42d0-11ef-96ac-773515fba644", 00:15:10.562 "strip_size_kb": 64, 00:15:10.562 "state": "configuring", 00:15:10.562 "raid_level": "concat", 00:15:10.562 "superblock": true, 00:15:10.562 "num_base_bdevs": 4, 00:15:10.562 "num_base_bdevs_discovered": 1, 00:15:10.562 "num_base_bdevs_operational": 4, 00:15:10.562 "base_bdevs_list": [ 00:15:10.562 { 00:15:10.562 "name": "pt1", 00:15:10.562 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:10.562 "is_configured": true, 00:15:10.562 "data_offset": 2048, 00:15:10.562 "data_size": 63488 00:15:10.562 }, 00:15:10.562 { 00:15:10.562 "name": null, 00:15:10.562 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:10.562 "is_configured": false, 00:15:10.562 "data_offset": 2048, 00:15:10.562 "data_size": 63488 00:15:10.562 }, 00:15:10.562 { 00:15:10.562 "name": null, 00:15:10.562 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:10.562 "is_configured": false, 00:15:10.562 "data_offset": 2048, 00:15:10.562 "data_size": 63488 00:15:10.562 }, 00:15:10.562 { 00:15:10.562 "name": null, 00:15:10.562 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:10.562 "is_configured": false, 00:15:10.562 
"data_offset": 2048, 00:15:10.562 "data_size": 63488 00:15:10.562 } 00:15:10.562 ] 00:15:10.562 }' 00:15:10.562 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:10.562 17:34:06 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:10.821 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:10.821 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:11.078 [2024-07-15 17:34:06.866303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:11.078 [2024-07-15 17:34:06.866371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.078 [2024-07-15 17:34:06.866383] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b234780 00:15:11.078 [2024-07-15 17:34:06.866391] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.078 [2024-07-15 17:34:06.866505] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.078 [2024-07-15 17:34:06.866516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:11.078 [2024-07-15 17:34:06.866539] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:11.078 [2024-07-15 17:34:06.866558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:11.078 pt2 00:15:11.078 17:34:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:11.336 [2024-07-15 17:34:07.098312] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:11.336 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.594 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:11.594 "name": "raid_bdev1", 00:15:11.594 "uuid": "6c329937-42d0-11ef-96ac-773515fba644", 00:15:11.594 "strip_size_kb": 64, 
00:15:11.594 "state": "configuring", 00:15:11.594 "raid_level": "concat", 00:15:11.594 "superblock": true, 00:15:11.594 "num_base_bdevs": 4, 00:15:11.594 "num_base_bdevs_discovered": 1, 00:15:11.594 "num_base_bdevs_operational": 4, 00:15:11.594 "base_bdevs_list": [ 00:15:11.594 { 00:15:11.594 "name": "pt1", 00:15:11.594 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:11.594 "is_configured": true, 00:15:11.594 "data_offset": 2048, 00:15:11.594 "data_size": 63488 00:15:11.594 }, 00:15:11.594 { 00:15:11.594 "name": null, 00:15:11.594 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:11.594 "is_configured": false, 00:15:11.594 "data_offset": 2048, 00:15:11.594 "data_size": 63488 00:15:11.594 }, 00:15:11.594 { 00:15:11.594 "name": null, 00:15:11.594 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:11.594 "is_configured": false, 00:15:11.594 "data_offset": 2048, 00:15:11.594 "data_size": 63488 00:15:11.594 }, 00:15:11.594 { 00:15:11.594 "name": null, 00:15:11.594 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:11.594 "is_configured": false, 00:15:11.594 "data_offset": 2048, 00:15:11.594 "data_size": 63488 00:15:11.594 } 00:15:11.594 ] 00:15:11.594 }' 00:15:11.595 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:11.595 17:34:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:12.161 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:12.161 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:12.161 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:12.161 [2024-07-15 17:34:07.958319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:12.161 [2024-07-15 17:34:07.958414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.161 [2024-07-15 17:34:07.958426] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b234780 00:15:12.161 [2024-07-15 17:34:07.958434] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.161 [2024-07-15 17:34:07.958554] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.161 [2024-07-15 17:34:07.958564] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:12.161 [2024-07-15 17:34:07.958588] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:12.161 [2024-07-15 17:34:07.958597] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:12.161 pt2 00:15:12.161 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:12.161 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:12.161 17:34:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:12.419 [2024-07-15 17:34:08.238325] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:12.419 [2024-07-15 17:34:08.238394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.419 [2024-07-15 17:34:08.238406] vbdev_passthru.c: 
680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b235b80 00:15:12.419 [2024-07-15 17:34:08.238414] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.419 [2024-07-15 17:34:08.238527] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.419 [2024-07-15 17:34:08.238538] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:12.419 [2024-07-15 17:34:08.238561] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:12.419 [2024-07-15 17:34:08.238579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:12.419 pt3 00:15:12.677 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:12.677 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:12.677 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:12.677 [2024-07-15 17:34:08.466326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:12.677 [2024-07-15 17:34:08.466375] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.677 [2024-07-15 17:34:08.466386] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x20a35b235900 00:15:12.677 [2024-07-15 17:34:08.466394] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.677 [2024-07-15 17:34:08.466506] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.677 [2024-07-15 17:34:08.466516] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:12.677 [2024-07-15 17:34:08.466538] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:12.677 [2024-07-15 17:34:08.466547] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:12.677 [2024-07-15 17:34:08.466584] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x20a35b234c80 00:15:12.677 [2024-07-15 17:34:08.466589] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:12.677 [2024-07-15 17:34:08.466611] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20a35b297e20 00:15:12.677 [2024-07-15 17:34:08.466662] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20a35b234c80 00:15:12.677 [2024-07-15 17:34:08.466667] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20a35b234c80 00:15:12.677 [2024-07-15 17:34:08.466688] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.677 pt4 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=concat 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.678 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.936 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:12.936 "name": "raid_bdev1", 00:15:12.936 "uuid": "6c329937-42d0-11ef-96ac-773515fba644", 00:15:12.936 "strip_size_kb": 64, 00:15:12.936 "state": "online", 00:15:12.936 "raid_level": "concat", 00:15:12.936 "superblock": true, 00:15:12.936 "num_base_bdevs": 4, 00:15:12.936 "num_base_bdevs_discovered": 4, 00:15:12.936 "num_base_bdevs_operational": 4, 00:15:12.936 "base_bdevs_list": [ 00:15:12.936 { 00:15:12.936 "name": "pt1", 00:15:12.936 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:12.936 "is_configured": true, 00:15:12.936 "data_offset": 2048, 00:15:12.936 "data_size": 63488 00:15:12.936 }, 00:15:12.936 { 00:15:12.936 "name": "pt2", 00:15:12.936 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:12.936 "is_configured": true, 00:15:12.936 "data_offset": 2048, 00:15:12.936 "data_size": 63488 00:15:12.936 }, 00:15:12.936 { 00:15:12.936 "name": "pt3", 00:15:12.936 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:12.936 "is_configured": true, 00:15:12.936 "data_offset": 2048, 00:15:12.936 "data_size": 63488 00:15:12.936 }, 00:15:12.936 { 00:15:12.936 "name": "pt4", 00:15:12.936 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:12.936 "is_configured": true, 00:15:12.936 "data_offset": 2048, 00:15:12.936 "data_size": 63488 00:15:12.936 } 00:15:12.936 ] 00:15:12.936 }' 00:15:12.936 17:34:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:12.936 17:34:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:13.503 [2024-07-15 17:34:09.270400] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:13.503 "name": "raid_bdev1", 00:15:13.503 "aliases": [ 00:15:13.503 "6c329937-42d0-11ef-96ac-773515fba644" 00:15:13.503 ], 00:15:13.503 "product_name": "Raid Volume", 00:15:13.503 "block_size": 512, 00:15:13.503 "num_blocks": 253952, 00:15:13.503 "uuid": "6c329937-42d0-11ef-96ac-773515fba644", 00:15:13.503 "assigned_rate_limits": { 00:15:13.503 "rw_ios_per_sec": 0, 00:15:13.503 "rw_mbytes_per_sec": 0, 00:15:13.503 "r_mbytes_per_sec": 0, 00:15:13.503 "w_mbytes_per_sec": 0 00:15:13.503 }, 00:15:13.503 "claimed": false, 00:15:13.503 "zoned": false, 00:15:13.503 "supported_io_types": { 00:15:13.503 "read": true, 00:15:13.503 "write": true, 00:15:13.503 "unmap": true, 00:15:13.503 "flush": true, 00:15:13.503 "reset": true, 00:15:13.503 "nvme_admin": false, 00:15:13.503 "nvme_io": false, 00:15:13.503 "nvme_io_md": false, 00:15:13.503 "write_zeroes": true, 00:15:13.503 "zcopy": false, 00:15:13.503 "get_zone_info": false, 00:15:13.503 "zone_management": false, 00:15:13.503 "zone_append": false, 00:15:13.503 "compare": false, 00:15:13.503 "compare_and_write": false, 00:15:13.503 "abort": false, 00:15:13.503 "seek_hole": false, 00:15:13.503 "seek_data": false, 00:15:13.503 "copy": false, 00:15:13.503 "nvme_iov_md": false 00:15:13.503 }, 00:15:13.503 "memory_domains": [ 00:15:13.503 { 00:15:13.503 "dma_device_id": "system", 00:15:13.503 "dma_device_type": 1 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.503 "dma_device_type": 2 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "dma_device_id": "system", 00:15:13.503 "dma_device_type": 1 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.503 "dma_device_type": 2 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "dma_device_id": "system", 00:15:13.503 "dma_device_type": 1 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.503 "dma_device_type": 2 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "dma_device_id": "system", 00:15:13.503 "dma_device_type": 1 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.503 "dma_device_type": 2 00:15:13.503 } 00:15:13.503 ], 00:15:13.503 "driver_specific": { 00:15:13.503 "raid": { 00:15:13.503 "uuid": "6c329937-42d0-11ef-96ac-773515fba644", 00:15:13.503 "strip_size_kb": 64, 00:15:13.503 "state": "online", 00:15:13.503 "raid_level": "concat", 00:15:13.503 "superblock": true, 00:15:13.503 "num_base_bdevs": 4, 00:15:13.503 "num_base_bdevs_discovered": 4, 00:15:13.503 "num_base_bdevs_operational": 4, 00:15:13.503 "base_bdevs_list": [ 00:15:13.503 { 00:15:13.503 "name": "pt1", 00:15:13.503 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.503 "is_configured": true, 00:15:13.503 "data_offset": 2048, 00:15:13.503 "data_size": 63488 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "name": "pt2", 00:15:13.503 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:13.503 "is_configured": true, 00:15:13.503 "data_offset": 2048, 00:15:13.503 "data_size": 63488 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "name": "pt3", 00:15:13.503 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:13.503 "is_configured": true, 00:15:13.503 "data_offset": 2048, 00:15:13.503 "data_size": 63488 00:15:13.503 }, 00:15:13.503 { 00:15:13.503 "name": "pt4", 00:15:13.503 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:15:13.503 "is_configured": true, 00:15:13.503 "data_offset": 2048, 00:15:13.503 "data_size": 63488 00:15:13.503 } 00:15:13.503 ] 00:15:13.503 } 00:15:13.503 } 00:15:13.503 }' 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:13.503 pt2 00:15:13.503 pt3 00:15:13.503 pt4' 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:13.503 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:13.762 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:13.762 "name": "pt1", 00:15:13.762 "aliases": [ 00:15:13.762 "00000000-0000-0000-0000-000000000001" 00:15:13.762 ], 00:15:13.762 "product_name": "passthru", 00:15:13.762 "block_size": 512, 00:15:13.762 "num_blocks": 65536, 00:15:13.762 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:13.762 "assigned_rate_limits": { 00:15:13.762 "rw_ios_per_sec": 0, 00:15:13.762 "rw_mbytes_per_sec": 0, 00:15:13.762 "r_mbytes_per_sec": 0, 00:15:13.762 "w_mbytes_per_sec": 0 00:15:13.762 }, 00:15:13.762 "claimed": true, 00:15:13.762 "claim_type": "exclusive_write", 00:15:13.762 "zoned": false, 00:15:13.762 "supported_io_types": { 00:15:13.762 "read": true, 00:15:13.762 "write": true, 00:15:13.762 "unmap": true, 00:15:13.762 "flush": true, 00:15:13.762 "reset": true, 00:15:13.762 "nvme_admin": false, 00:15:13.762 "nvme_io": false, 00:15:13.762 "nvme_io_md": false, 00:15:13.762 "write_zeroes": true, 00:15:13.762 "zcopy": true, 00:15:13.762 "get_zone_info": false, 00:15:13.762 "zone_management": false, 00:15:13.762 "zone_append": false, 00:15:13.762 "compare": false, 00:15:13.762 "compare_and_write": false, 00:15:13.762 "abort": true, 00:15:13.762 "seek_hole": false, 00:15:13.762 "seek_data": false, 00:15:13.762 "copy": true, 00:15:13.762 "nvme_iov_md": false 00:15:13.762 }, 00:15:13.762 "memory_domains": [ 00:15:13.762 { 00:15:13.762 "dma_device_id": "system", 00:15:13.762 "dma_device_type": 1 00:15:13.762 }, 00:15:13.762 { 00:15:13.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:13.762 "dma_device_type": 2 00:15:13.762 } 00:15:13.762 ], 00:15:13.762 "driver_specific": { 00:15:13.762 "passthru": { 00:15:13.762 "name": "pt1", 00:15:13.762 "base_bdev_name": "malloc1" 00:15:13.762 } 00:15:13.762 } 00:15:13.762 }' 00:15:13.762 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:13.762 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:14.020 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:14.278 "name": "pt2", 00:15:14.278 "aliases": [ 00:15:14.278 "00000000-0000-0000-0000-000000000002" 00:15:14.278 ], 00:15:14.278 "product_name": "passthru", 00:15:14.278 "block_size": 512, 00:15:14.278 "num_blocks": 65536, 00:15:14.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:14.278 "assigned_rate_limits": { 00:15:14.278 "rw_ios_per_sec": 0, 00:15:14.278 "rw_mbytes_per_sec": 0, 00:15:14.278 "r_mbytes_per_sec": 0, 00:15:14.278 "w_mbytes_per_sec": 0 00:15:14.278 }, 00:15:14.278 "claimed": true, 00:15:14.278 "claim_type": "exclusive_write", 00:15:14.278 "zoned": false, 00:15:14.278 "supported_io_types": { 00:15:14.278 "read": true, 00:15:14.278 "write": true, 00:15:14.278 "unmap": true, 00:15:14.278 "flush": true, 00:15:14.278 "reset": true, 00:15:14.278 "nvme_admin": false, 00:15:14.278 "nvme_io": false, 00:15:14.278 "nvme_io_md": false, 00:15:14.278 "write_zeroes": true, 00:15:14.278 "zcopy": true, 00:15:14.278 "get_zone_info": false, 00:15:14.278 "zone_management": false, 00:15:14.278 "zone_append": false, 00:15:14.278 "compare": false, 00:15:14.278 "compare_and_write": false, 00:15:14.278 "abort": true, 00:15:14.278 "seek_hole": false, 00:15:14.278 "seek_data": false, 00:15:14.278 "copy": true, 00:15:14.278 "nvme_iov_md": false 00:15:14.278 }, 00:15:14.278 "memory_domains": [ 00:15:14.278 { 00:15:14.278 "dma_device_id": "system", 00:15:14.278 "dma_device_type": 1 00:15:14.278 }, 00:15:14.278 { 00:15:14.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.278 "dma_device_type": 2 00:15:14.278 } 00:15:14.278 ], 00:15:14.278 "driver_specific": { 00:15:14.278 "passthru": { 00:15:14.278 "name": "pt2", 00:15:14.278 "base_bdev_name": "malloc2" 00:15:14.278 } 00:15:14.278 } 00:15:14.278 }' 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:14.278 17:34:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:14.537 "name": "pt3", 00:15:14.537 "aliases": [ 00:15:14.537 "00000000-0000-0000-0000-000000000003" 00:15:14.537 ], 00:15:14.537 "product_name": "passthru", 00:15:14.537 "block_size": 512, 00:15:14.537 "num_blocks": 65536, 00:15:14.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:14.537 "assigned_rate_limits": { 00:15:14.537 "rw_ios_per_sec": 0, 00:15:14.537 "rw_mbytes_per_sec": 0, 00:15:14.537 "r_mbytes_per_sec": 0, 00:15:14.537 "w_mbytes_per_sec": 0 00:15:14.537 }, 00:15:14.537 "claimed": true, 00:15:14.537 "claim_type": "exclusive_write", 00:15:14.537 "zoned": false, 00:15:14.537 "supported_io_types": { 00:15:14.537 "read": true, 00:15:14.537 "write": true, 00:15:14.537 "unmap": true, 00:15:14.537 "flush": true, 00:15:14.537 "reset": true, 00:15:14.537 "nvme_admin": false, 00:15:14.537 "nvme_io": false, 00:15:14.537 "nvme_io_md": false, 00:15:14.537 "write_zeroes": true, 00:15:14.537 "zcopy": true, 00:15:14.537 "get_zone_info": false, 00:15:14.537 "zone_management": false, 00:15:14.537 "zone_append": false, 00:15:14.537 "compare": false, 00:15:14.537 "compare_and_write": false, 00:15:14.537 "abort": true, 00:15:14.537 "seek_hole": false, 00:15:14.537 "seek_data": false, 00:15:14.537 "copy": true, 00:15:14.537 "nvme_iov_md": false 00:15:14.537 }, 00:15:14.537 "memory_domains": [ 00:15:14.537 { 00:15:14.537 "dma_device_id": "system", 00:15:14.537 "dma_device_type": 1 00:15:14.537 }, 00:15:14.537 { 00:15:14.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.537 "dma_device_type": 2 00:15:14.537 } 00:15:14.537 ], 00:15:14.537 "driver_specific": { 00:15:14.537 "passthru": { 00:15:14.537 "name": "pt3", 00:15:14.537 "base_bdev_name": "malloc3" 00:15:14.537 } 00:15:14.537 } 00:15:14.537 }' 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:14.537 
17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:14.537 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:14.796 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:14.796 "name": "pt4", 00:15:14.796 "aliases": [ 00:15:14.796 "00000000-0000-0000-0000-000000000004" 00:15:14.796 ], 00:15:14.796 "product_name": "passthru", 00:15:14.796 "block_size": 512, 00:15:14.796 "num_blocks": 65536, 00:15:14.796 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:14.796 "assigned_rate_limits": { 00:15:14.796 "rw_ios_per_sec": 0, 00:15:14.796 "rw_mbytes_per_sec": 0, 00:15:14.796 "r_mbytes_per_sec": 0, 00:15:14.796 "w_mbytes_per_sec": 0 00:15:14.796 }, 00:15:14.796 "claimed": true, 00:15:14.796 "claim_type": "exclusive_write", 00:15:14.796 "zoned": false, 00:15:14.796 "supported_io_types": { 00:15:14.796 "read": true, 00:15:14.796 "write": true, 00:15:14.796 "unmap": true, 00:15:14.796 "flush": true, 00:15:14.796 "reset": true, 00:15:14.796 "nvme_admin": false, 00:15:14.796 "nvme_io": false, 00:15:14.796 "nvme_io_md": false, 00:15:14.796 "write_zeroes": true, 00:15:14.796 "zcopy": true, 00:15:14.796 "get_zone_info": false, 00:15:14.796 "zone_management": false, 00:15:14.796 "zone_append": false, 00:15:14.796 "compare": false, 00:15:14.796 "compare_and_write": false, 00:15:14.796 "abort": true, 00:15:14.796 "seek_hole": false, 00:15:14.796 "seek_data": false, 00:15:14.796 "copy": true, 00:15:14.796 "nvme_iov_md": false 00:15:14.796 }, 00:15:14.796 "memory_domains": [ 00:15:14.796 { 00:15:14.796 "dma_device_id": "system", 00:15:14.796 "dma_device_type": 1 00:15:14.796 }, 00:15:14.796 { 00:15:14.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:14.796 "dma_device_type": 2 00:15:14.796 } 00:15:14.796 ], 00:15:14.796 "driver_specific": { 00:15:14.796 "passthru": { 00:15:14.796 "name": "pt4", 00:15:14.796 "base_bdev_name": "malloc4" 00:15:14.796 } 00:15:14.796 } 00:15:14.796 }' 00:15:14.796 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.796 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:14.796 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:14.796 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:15.055 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:15.314 [2024-07-15 17:34:10.938497] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 6c329937-42d0-11ef-96ac-773515fba644 '!=' 6c329937-42d0-11ef-96ac-773515fba644 ']' 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 62312 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 62312 ']' 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 62312 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 62312 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:15.314 killing process with pid 62312 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62312' 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 62312 00:15:15.314 [2024-07-15 17:34:10.969362] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:15.314 [2024-07-15 17:34:10.969399] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:15.314 [2024-07-15 17:34:10.969415] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:15.314 [2024-07-15 17:34:10.969420] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20a35b234c80 name raid_bdev1, state offline 00:15:15.314 17:34:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 62312 00:15:15.314 [2024-07-15 17:34:10.993247] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.573 17:34:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:15.573 00:15:15.573 real 0m13.711s 00:15:15.573 user 0m24.451s 00:15:15.573 sys 0m2.163s 00:15:15.573 17:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:15.573 17:34:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.573 ************************************ 00:15:15.573 END TEST raid_superblock_test 00:15:15.573 ************************************ 00:15:15.573 17:34:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:15.573 17:34:11 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:15:15.573 17:34:11 
bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:15.573 17:34:11 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:15.573 17:34:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.573 ************************************ 00:15:15.573 START TEST raid_read_error_test 00:15:15.573 ************************************ 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:15.573 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.MBiqC9gZGB 00:15:15.574 17:34:11 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62713 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62713 /var/tmp/spdk-raid.sock 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 62713 ']' 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:15.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:15.574 17:34:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:15.574 [2024-07-15 17:34:11.242127] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:15:15.574 [2024-07-15 17:34:11.242311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:16.141 EAL: TSC is not safe to use in SMP mode 00:15:16.141 EAL: TSC is not invariant 00:15:16.141 [2024-07-15 17:34:11.821393] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.141 [2024-07-15 17:34:11.905491] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
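With bdevperf up and listening on the RAID socket, the entries that follow assemble each base device for the read-error test as a three-layer stack: a malloc bdev, an error-injection bdev wrapped around it, and a passthru bdev that the concat volume ultimately claims. In outline, the RPC sequence being exercised (the loop is illustrative; the individual calls are the ones visible in the log below):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for i in 1 2 3 4; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc   # malloc base bdev (size 32, block size 512)
        $rpc bdev_error_create BaseBdev${i}_malloc              # exposes EE_BaseBdev${i}_malloc for fault injection
        $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # assemble the concat volume with a superblock, then inject read failures on the first leg
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc read failure
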
00:15:16.141 [2024-07-15 17:34:11.907576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.141 [2024-07-15 17:34:11.908329] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.141 [2024-07-15 17:34:11.908342] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.705 17:34:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:16.705 17:34:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:16.705 17:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:16.705 17:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:16.705 BaseBdev1_malloc 00:15:16.705 17:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:16.963 true 00:15:17.262 17:34:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:17.262 [2024-07-15 17:34:13.067928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:17.262 [2024-07-15 17:34:13.067993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:17.262 [2024-07-15 17:34:13.068021] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x150b39834780 00:15:17.262 [2024-07-15 17:34:13.068030] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:17.262 [2024-07-15 17:34:13.068762] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:17.262 [2024-07-15 17:34:13.068789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:17.262 BaseBdev1 00:15:17.262 17:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:17.262 17:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:17.520 BaseBdev2_malloc 00:15:17.520 17:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:17.778 true 00:15:17.778 17:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:18.036 [2024-07-15 17:34:13.831953] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:18.036 [2024-07-15 17:34:13.832018] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.036 [2024-07-15 17:34:13.832046] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x150b39834c80 00:15:18.036 [2024-07-15 17:34:13.832055] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.036 [2024-07-15 17:34:13.832759] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.036 [2024-07-15 17:34:13.832784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:15:18.036 BaseBdev2 00:15:18.036 17:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:18.036 17:34:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:18.293 BaseBdev3_malloc 00:15:18.293 17:34:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:18.550 true 00:15:18.808 17:34:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:18.808 [2024-07-15 17:34:14.599958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:18.808 [2024-07-15 17:34:14.600007] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.808 [2024-07-15 17:34:14.600032] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x150b39835180 00:15:18.808 [2024-07-15 17:34:14.600041] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.808 [2024-07-15 17:34:14.600722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.808 [2024-07-15 17:34:14.600749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:18.808 BaseBdev3 00:15:18.808 17:34:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:18.808 17:34:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:19.372 BaseBdev4_malloc 00:15:19.372 17:34:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:19.372 true 00:15:19.372 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:19.938 [2024-07-15 17:34:15.463965] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:19.938 [2024-07-15 17:34:15.464014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.938 [2024-07-15 17:34:15.464052] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x150b39835680 00:15:19.938 [2024-07-15 17:34:15.464060] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.938 [2024-07-15 17:34:15.464723] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.938 [2024-07-15 17:34:15.464749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:19.938 BaseBdev4 00:15:19.938 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:19.938 [2024-07-15 17:34:15.751994] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.938 [2024-07-15 17:34:15.752612] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.938 [2024-07-15 17:34:15.752637] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:19.938 [2024-07-15 17:34:15.752652] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.938 [2024-07-15 17:34:15.752715] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x150b39835900 00:15:19.938 [2024-07-15 17:34:15.752721] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:19.938 [2024-07-15 17:34:15.752772] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x150b398a0e20 00:15:19.938 [2024-07-15 17:34:15.752845] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x150b39835900 00:15:19.938 [2024-07-15 17:34:15.752850] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x150b39835900 00:15:19.938 [2024-07-15 17:34:15.752876] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.196 17:34:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.454 17:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.454 "name": "raid_bdev1", 00:15:20.454 "uuid": "752143fd-42d0-11ef-96ac-773515fba644", 00:15:20.454 "strip_size_kb": 64, 00:15:20.454 "state": "online", 00:15:20.454 "raid_level": "concat", 00:15:20.454 "superblock": true, 00:15:20.454 "num_base_bdevs": 4, 00:15:20.454 "num_base_bdevs_discovered": 4, 00:15:20.454 "num_base_bdevs_operational": 4, 00:15:20.454 "base_bdevs_list": [ 00:15:20.454 { 00:15:20.454 "name": "BaseBdev1", 00:15:20.454 "uuid": "078072ee-0703-1755-b10a-563b44cd68d9", 00:15:20.454 "is_configured": true, 00:15:20.454 "data_offset": 2048, 00:15:20.454 "data_size": 63488 00:15:20.454 }, 00:15:20.454 { 00:15:20.454 "name": "BaseBdev2", 00:15:20.454 "uuid": "a845dfae-bf5c-ee56-96e3-95385627aeda", 00:15:20.454 "is_configured": true, 00:15:20.454 "data_offset": 2048, 00:15:20.454 "data_size": 63488 00:15:20.454 }, 00:15:20.454 { 00:15:20.454 "name": "BaseBdev3", 00:15:20.454 "uuid": 
"ba5fab46-3348-dd52-a535-a03fedcdbf15", 00:15:20.454 "is_configured": true, 00:15:20.454 "data_offset": 2048, 00:15:20.454 "data_size": 63488 00:15:20.454 }, 00:15:20.454 { 00:15:20.454 "name": "BaseBdev4", 00:15:20.454 "uuid": "fffac597-5b64-085a-8873-a3e8ecff4e11", 00:15:20.454 "is_configured": true, 00:15:20.454 "data_offset": 2048, 00:15:20.454 "data_size": 63488 00:15:20.454 } 00:15:20.454 ] 00:15:20.454 }' 00:15:20.454 17:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.454 17:34:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:20.711 17:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:20.711 17:34:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:20.711 [2024-07-15 17:34:16.476208] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x150b398a0ec0 00:15:21.665 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:21.923 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.182 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.182 "name": "raid_bdev1", 00:15:22.182 "uuid": "752143fd-42d0-11ef-96ac-773515fba644", 00:15:22.182 "strip_size_kb": 64, 00:15:22.182 "state": "online", 00:15:22.182 "raid_level": "concat", 00:15:22.182 "superblock": true, 00:15:22.182 "num_base_bdevs": 4, 00:15:22.182 "num_base_bdevs_discovered": 4, 00:15:22.182 "num_base_bdevs_operational": 4, 00:15:22.182 "base_bdevs_list": [ 00:15:22.182 { 00:15:22.182 "name": "BaseBdev1", 00:15:22.182 "uuid": 
"078072ee-0703-1755-b10a-563b44cd68d9", 00:15:22.182 "is_configured": true, 00:15:22.182 "data_offset": 2048, 00:15:22.182 "data_size": 63488 00:15:22.182 }, 00:15:22.182 { 00:15:22.182 "name": "BaseBdev2", 00:15:22.182 "uuid": "a845dfae-bf5c-ee56-96e3-95385627aeda", 00:15:22.182 "is_configured": true, 00:15:22.182 "data_offset": 2048, 00:15:22.182 "data_size": 63488 00:15:22.182 }, 00:15:22.182 { 00:15:22.182 "name": "BaseBdev3", 00:15:22.182 "uuid": "ba5fab46-3348-dd52-a535-a03fedcdbf15", 00:15:22.182 "is_configured": true, 00:15:22.182 "data_offset": 2048, 00:15:22.182 "data_size": 63488 00:15:22.182 }, 00:15:22.182 { 00:15:22.182 "name": "BaseBdev4", 00:15:22.182 "uuid": "fffac597-5b64-085a-8873-a3e8ecff4e11", 00:15:22.182 "is_configured": true, 00:15:22.182 "data_offset": 2048, 00:15:22.182 "data_size": 63488 00:15:22.182 } 00:15:22.182 ] 00:15:22.182 }' 00:15:22.182 17:34:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.182 17:34:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.440 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:22.698 [2024-07-15 17:34:18.454400] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.698 [2024-07-15 17:34:18.454445] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.698 [2024-07-15 17:34:18.454809] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.698 [2024-07-15 17:34:18.454819] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.698 [2024-07-15 17:34:18.454843] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.698 [2024-07-15 17:34:18.454848] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x150b39835900 name raid_bdev1, state offline 00:15:22.698 0 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62713 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 62713 ']' 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 62713 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62713 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:15:22.698 killing process with pid 62713 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62713' 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 62713 00:15:22.698 [2024-07-15 17:34:18.481711] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.698 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 62713 00:15:22.698 [2024-07-15 17:34:18.505371] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.MBiqC9gZGB 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:15:22.956 00:15:22.956 real 0m7.466s 00:15:22.956 user 0m12.083s 00:15:22.956 sys 0m1.168s 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:22.956 17:34:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.956 ************************************ 00:15:22.956 END TEST raid_read_error_test 00:15:22.956 ************************************ 00:15:22.956 17:34:18 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:22.956 17:34:18 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:22.956 17:34:18 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:22.956 17:34:18 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:22.956 17:34:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:22.956 ************************************ 00:15:22.956 START TEST raid_write_error_test 00:15:22.956 ************************************ 00:15:22.956 17:34:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:15:22.956 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:22.956 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:22.956 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:22.956 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:22.956 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:22.956 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:22.956 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.583aqUAuZK 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62851 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62851 /var/tmp/spdk-raid.sock 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 62851 ']' 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:22.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:22.957 17:34:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.957 [2024-07-15 17:34:18.749829] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:15:22.957 [2024-07-15 17:34:18.749991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:23.523 EAL: TSC is not safe to use in SMP mode 00:15:23.523 EAL: TSC is not invariant 00:15:23.523 [2024-07-15 17:34:19.292440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.782 [2024-07-15 17:34:19.382771] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
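Note: the base-bdev construction that the write test performs next (and that the read test above already went through) reduces to three RPCs per leg. A condensed sketch, not part of the captured output — the loop form and the $RPC shorthand are assumptions, while the RPC names, the 32 MB/512-byte malloc geometry and the socket path are taken from the log:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
    $RPC bdev_malloc_create 32 512 -b "${bdev}_malloc"            # 32 MB malloc bdev, 512-byte blocks
    $RPC bdev_error_create "${bdev}_malloc"                       # error-injection wrapper, registered as EE_${bdev}_malloc
    $RPC bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"   # passthru exposes the leg under its final BaseBdevN name
done

Each leg therefore ends up as malloc -> error bdev -> passthru, so a later bdev_error_inject_error on EE_BaseBdev1_malloc hits exactly one member of the array.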
00:15:23.782 [2024-07-15 17:34:19.384921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.782 [2024-07-15 17:34:19.385670] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.782 [2024-07-15 17:34:19.385683] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.039 17:34:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.040 17:34:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:24.040 17:34:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:24.040 17:34:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:24.298 BaseBdev1_malloc 00:15:24.298 17:34:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:24.556 true 00:15:24.556 17:34:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:24.815 [2024-07-15 17:34:20.642339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:24.815 [2024-07-15 17:34:20.642403] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:24.815 [2024-07-15 17:34:20.642428] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x99bda234780 00:15:24.815 [2024-07-15 17:34:20.642437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:24.815 [2024-07-15 17:34:20.643089] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:24.815 [2024-07-15 17:34:20.643125] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:25.073 BaseBdev1 00:15:25.073 17:34:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:25.073 17:34:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:25.073 BaseBdev2_malloc 00:15:25.073 17:34:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:25.331 true 00:15:25.331 17:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:25.590 [2024-07-15 17:34:21.354341] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:25.590 [2024-07-15 17:34:21.354392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:25.590 [2024-07-15 17:34:21.354420] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x99bda234c80 00:15:25.590 [2024-07-15 17:34:21.354437] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:25.590 [2024-07-15 17:34:21.355116] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:25.590 [2024-07-15 17:34:21.355135] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:15:25.590 BaseBdev2 00:15:25.590 17:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:25.590 17:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:25.847 BaseBdev3_malloc 00:15:25.847 17:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:26.104 true 00:15:26.104 17:34:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:26.362 [2024-07-15 17:34:22.130387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:26.362 [2024-07-15 17:34:22.130465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.362 [2024-07-15 17:34:22.130503] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x99bda235180 00:15:26.362 [2024-07-15 17:34:22.130518] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.362 [2024-07-15 17:34:22.131243] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.362 [2024-07-15 17:34:22.131284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:26.362 BaseBdev3 00:15:26.362 17:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:26.362 17:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:26.619 BaseBdev4_malloc 00:15:26.619 17:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:26.877 true 00:15:26.877 17:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:27.135 [2024-07-15 17:34:22.858411] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:27.135 [2024-07-15 17:34:22.858499] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:27.135 [2024-07-15 17:34:22.858547] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x99bda235680 00:15:27.135 [2024-07-15 17:34:22.858565] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:27.135 [2024-07-15 17:34:22.859385] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:27.135 [2024-07-15 17:34:22.859425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:27.135 BaseBdev4 00:15:27.135 17:34:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:27.394 [2024-07-15 17:34:23.146399] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.394 [2024-07-15 17:34:23.147021] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:27.394 [2024-07-15 17:34:23.147048] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:27.394 [2024-07-15 17:34:23.147063] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:27.394 [2024-07-15 17:34:23.147130] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x99bda235900 00:15:27.394 [2024-07-15 17:34:23.147136] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:27.394 [2024-07-15 17:34:23.147175] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x99bda2a0e20 00:15:27.394 [2024-07-15 17:34:23.147272] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x99bda235900 00:15:27.394 [2024-07-15 17:34:23.147277] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x99bda235900 00:15:27.394 [2024-07-15 17:34:23.147305] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.394 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.651 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.651 "name": "raid_bdev1", 00:15:27.651 "uuid": "79898fc9-42d0-11ef-96ac-773515fba644", 00:15:27.651 "strip_size_kb": 64, 00:15:27.651 "state": "online", 00:15:27.651 "raid_level": "concat", 00:15:27.651 "superblock": true, 00:15:27.651 "num_base_bdevs": 4, 00:15:27.651 "num_base_bdevs_discovered": 4, 00:15:27.651 "num_base_bdevs_operational": 4, 00:15:27.651 "base_bdevs_list": [ 00:15:27.651 { 00:15:27.651 "name": "BaseBdev1", 00:15:27.651 "uuid": "e21e0d96-080e-5b50-8dac-1117c3909db3", 00:15:27.651 "is_configured": true, 00:15:27.651 "data_offset": 2048, 00:15:27.651 "data_size": 63488 00:15:27.651 }, 00:15:27.651 { 00:15:27.651 "name": "BaseBdev2", 00:15:27.652 "uuid": "097ff1a0-fa0b-d459-a5f9-7d21edd6d202", 00:15:27.652 "is_configured": true, 00:15:27.652 "data_offset": 2048, 00:15:27.652 "data_size": 63488 00:15:27.652 }, 00:15:27.652 { 00:15:27.652 "name": "BaseBdev3", 00:15:27.652 "uuid": 
"80e7efb6-ddfb-b752-af13-f833fc2cebc1", 00:15:27.652 "is_configured": true, 00:15:27.652 "data_offset": 2048, 00:15:27.652 "data_size": 63488 00:15:27.652 }, 00:15:27.652 { 00:15:27.652 "name": "BaseBdev4", 00:15:27.652 "uuid": "c4a0dee1-0fb5-405a-a6ed-dcc206b2f40b", 00:15:27.652 "is_configured": true, 00:15:27.652 "data_offset": 2048, 00:15:27.652 "data_size": 63488 00:15:27.652 } 00:15:27.652 ] 00:15:27.652 }' 00:15:27.652 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.652 17:34:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:28.216 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:28.216 17:34:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:28.216 [2024-07-15 17:34:23.898626] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x99bda2a0ec0 00:15:29.157 17:34:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.414 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.672 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:29.673 "name": "raid_bdev1", 00:15:29.673 "uuid": "79898fc9-42d0-11ef-96ac-773515fba644", 00:15:29.673 "strip_size_kb": 64, 00:15:29.673 "state": "online", 00:15:29.673 "raid_level": "concat", 00:15:29.673 "superblock": true, 00:15:29.673 "num_base_bdevs": 4, 00:15:29.673 "num_base_bdevs_discovered": 4, 00:15:29.673 "num_base_bdevs_operational": 4, 00:15:29.673 "base_bdevs_list": [ 00:15:29.673 { 00:15:29.673 "name": "BaseBdev1", 00:15:29.673 "uuid": 
"e21e0d96-080e-5b50-8dac-1117c3909db3", 00:15:29.673 "is_configured": true, 00:15:29.673 "data_offset": 2048, 00:15:29.673 "data_size": 63488 00:15:29.673 }, 00:15:29.673 { 00:15:29.673 "name": "BaseBdev2", 00:15:29.673 "uuid": "097ff1a0-fa0b-d459-a5f9-7d21edd6d202", 00:15:29.673 "is_configured": true, 00:15:29.673 "data_offset": 2048, 00:15:29.673 "data_size": 63488 00:15:29.673 }, 00:15:29.673 { 00:15:29.673 "name": "BaseBdev3", 00:15:29.673 "uuid": "80e7efb6-ddfb-b752-af13-f833fc2cebc1", 00:15:29.673 "is_configured": true, 00:15:29.673 "data_offset": 2048, 00:15:29.673 "data_size": 63488 00:15:29.673 }, 00:15:29.673 { 00:15:29.673 "name": "BaseBdev4", 00:15:29.673 "uuid": "c4a0dee1-0fb5-405a-a6ed-dcc206b2f40b", 00:15:29.673 "is_configured": true, 00:15:29.673 "data_offset": 2048, 00:15:29.673 "data_size": 63488 00:15:29.673 } 00:15:29.673 ] 00:15:29.673 }' 00:15:29.673 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:29.673 17:34:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:29.931 17:34:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:30.497 [2024-07-15 17:34:26.020820] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:30.497 [2024-07-15 17:34:26.020846] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:30.497 [2024-07-15 17:34:26.021177] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:30.497 [2024-07-15 17:34:26.021188] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.497 [2024-07-15 17:34:26.021196] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:30.497 [2024-07-15 17:34:26.021201] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x99bda235900 name raid_bdev1, state offline 00:15:30.497 0 00:15:30.497 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62851 00:15:30.497 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 62851 ']' 00:15:30.497 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 62851 00:15:30.497 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:30.497 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:30.497 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62851 00:15:30.497 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:15:30.497 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:15:30.498 killing process with pid 62851 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62851' 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 62851 00:15:30.498 [2024-07-15 17:34:26.048463] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 62851 00:15:30.498 [2024-07-15 
17:34:26.072060] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.583aqUAuZK 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.47 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.47 != \0\.\0\0 ]] 00:15:30.498 00:15:30.498 real 0m7.518s 00:15:30.498 user 0m12.013s 00:15:30.498 sys 0m1.276s 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:30.498 ************************************ 00:15:30.498 END TEST raid_write_error_test 00:15:30.498 ************************************ 00:15:30.498 17:34:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.498 17:34:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:30.498 17:34:26 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:30.498 17:34:26 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:30.498 17:34:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:30.498 17:34:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:30.498 17:34:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:30.498 ************************************ 00:15:30.498 START TEST raid_state_function_test 00:15:30.498 ************************************ 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
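Note: both error tests that finish above follow the same verification pattern — arm the error bdev under the first leg, drive I/O with bdevperf, then require a non-zero failure rate in the bdevperf log, since concat has no redundancy to absorb the fault. A condensed sketch of that sequence, not part of the captured output; $RPC, $SPDK and $log are shorthands introduced here (the real log file name comes from mktemp -p /raidtest), and the real script also kills the bdevperf process before parsing its output:

SPDK=/home/vagrant/spdk_repo/spdk
RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
log=/raidtest/tmp.XXXXXXXXXX                                      # placeholder for the mktemp-generated name

$RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure    # 'read failure' in the read-error test
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
$RPC bdev_raid_delete raid_bdev1

fail_per_s=$(grep -v Job "$log" | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s != "0.00" ]]                                       # concat cannot mask the injected failures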
00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=62991 00:15:30.498 Process raid pid: 62991 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 62991' 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 62991 /var/tmp/spdk-raid.sock 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 62991 ']' 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:30.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:30.498 17:34:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.498 [2024-07-15 17:34:26.308688] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
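Note: before any bdev_* RPC can be issued, raid_state_function_test starts a plain bdev_svc application as its RPC target and waits for its UNIX socket to answer. A sketch of that bring-up — the bdev_svc path, socket and flags are as logged, while the polling loop is an assumption standing in for the waitforlisten helper used by the script:

/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
raid_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1                                                     # keep polling until the RPC server is listening
done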
00:15:30.498 [2024-07-15 17:34:26.308879] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:31.432 EAL: TSC is not safe to use in SMP mode 00:15:31.432 EAL: TSC is not invariant 00:15:31.432 [2024-07-15 17:34:27.024699] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.432 [2024-07-15 17:34:27.110577] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:31.432 [2024-07-15 17:34:27.112657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:31.432 [2024-07-15 17:34:27.113402] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.432 [2024-07-15 17:34:27.113416] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:31.690 17:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:31.690 17:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:31.690 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:31.947 [2024-07-15 17:34:27.605277] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:31.947 [2024-07-15 17:34:27.605327] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:31.947 [2024-07-15 17:34:27.605332] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:31.947 [2024-07-15 17:34:27.605341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:31.947 [2024-07-15 17:34:27.605345] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:31.947 [2024-07-15 17:34:27.605379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:31.947 [2024-07-15 17:34:27.605383] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:31.947 [2024-07-15 17:34:27.605391] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.947 17:34:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.947 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:32.204 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:32.204 "name": "Existed_Raid", 00:15:32.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.204 "strip_size_kb": 0, 00:15:32.204 "state": "configuring", 00:15:32.204 "raid_level": "raid1", 00:15:32.204 "superblock": false, 00:15:32.204 "num_base_bdevs": 4, 00:15:32.204 "num_base_bdevs_discovered": 0, 00:15:32.204 "num_base_bdevs_operational": 4, 00:15:32.204 "base_bdevs_list": [ 00:15:32.204 { 00:15:32.204 "name": "BaseBdev1", 00:15:32.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.204 "is_configured": false, 00:15:32.204 "data_offset": 0, 00:15:32.204 "data_size": 0 00:15:32.204 }, 00:15:32.204 { 00:15:32.204 "name": "BaseBdev2", 00:15:32.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.204 "is_configured": false, 00:15:32.204 "data_offset": 0, 00:15:32.204 "data_size": 0 00:15:32.204 }, 00:15:32.204 { 00:15:32.204 "name": "BaseBdev3", 00:15:32.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.204 "is_configured": false, 00:15:32.204 "data_offset": 0, 00:15:32.204 "data_size": 0 00:15:32.204 }, 00:15:32.204 { 00:15:32.204 "name": "BaseBdev4", 00:15:32.204 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.204 "is_configured": false, 00:15:32.204 "data_offset": 0, 00:15:32.204 "data_size": 0 00:15:32.204 } 00:15:32.204 ] 00:15:32.204 }' 00:15:32.204 17:34:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:32.204 17:34:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:32.462 17:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:32.720 [2024-07-15 17:34:28.453273] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:32.720 [2024-07-15 17:34:28.453295] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xca563234500 name Existed_Raid, state configuring 00:15:32.720 17:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:32.977 [2024-07-15 17:34:28.677283] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:32.977 [2024-07-15 17:34:28.677342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:32.977 [2024-07-15 17:34:28.677347] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:32.977 [2024-07-15 17:34:28.677355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:32.977 [2024-07-15 17:34:28.677359] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:32.977 [2024-07-15 17:34:28.677366] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:32.977 [2024-07-15 17:34:28.677369] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:32.977 
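Note: the exchange above is the first state check of the test — a raid1 array can be registered before any of its members exist, after which it sits in the "configuring" state with num_base_bdevs_discovered at 0 (the array is then deleted and re-created before the first real base bdev is added). A minimal sketch of that step, not part of the captured output, with $RPC as a shorthand for the rpc.py invocation seen in the log:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
$RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'
# prints "configuring": the array is registered but none of its base bdevs have been discovered yet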
[2024-07-15 17:34:28.677376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:32.977 17:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:33.235 [2024-07-15 17:34:28.914299] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:33.235 BaseBdev1 00:15:33.235 17:34:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:33.235 17:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:33.235 17:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:33.235 17:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:33.235 17:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:33.235 17:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:33.235 17:34:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:33.491 17:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:33.749 [ 00:15:33.749 { 00:15:33.749 "name": "BaseBdev1", 00:15:33.749 "aliases": [ 00:15:33.749 "7cf9869e-42d0-11ef-96ac-773515fba644" 00:15:33.749 ], 00:15:33.749 "product_name": "Malloc disk", 00:15:33.749 "block_size": 512, 00:15:33.749 "num_blocks": 65536, 00:15:33.749 "uuid": "7cf9869e-42d0-11ef-96ac-773515fba644", 00:15:33.749 "assigned_rate_limits": { 00:15:33.749 "rw_ios_per_sec": 0, 00:15:33.749 "rw_mbytes_per_sec": 0, 00:15:33.749 "r_mbytes_per_sec": 0, 00:15:33.749 "w_mbytes_per_sec": 0 00:15:33.749 }, 00:15:33.749 "claimed": true, 00:15:33.749 "claim_type": "exclusive_write", 00:15:33.749 "zoned": false, 00:15:33.749 "supported_io_types": { 00:15:33.749 "read": true, 00:15:33.749 "write": true, 00:15:33.749 "unmap": true, 00:15:33.749 "flush": true, 00:15:33.749 "reset": true, 00:15:33.749 "nvme_admin": false, 00:15:33.749 "nvme_io": false, 00:15:33.749 "nvme_io_md": false, 00:15:33.749 "write_zeroes": true, 00:15:33.749 "zcopy": true, 00:15:33.749 "get_zone_info": false, 00:15:33.749 "zone_management": false, 00:15:33.749 "zone_append": false, 00:15:33.749 "compare": false, 00:15:33.749 "compare_and_write": false, 00:15:33.749 "abort": true, 00:15:33.749 "seek_hole": false, 00:15:33.749 "seek_data": false, 00:15:33.749 "copy": true, 00:15:33.749 "nvme_iov_md": false 00:15:33.749 }, 00:15:33.749 "memory_domains": [ 00:15:33.749 { 00:15:33.749 "dma_device_id": "system", 00:15:33.749 "dma_device_type": 1 00:15:33.749 }, 00:15:33.749 { 00:15:33.749 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.749 "dma_device_type": 2 00:15:33.749 } 00:15:33.749 ], 00:15:33.749 "driver_specific": {} 00:15:33.749 } 00:15:33.749 ] 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
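Note: from here on the test adds the legs one at a time (BaseBdev1 has just been created and claimed) and re-runs verify_raid_bdev_state after each addition, expecting the state to stay "configuring" while num_base_bdevs_discovered climbs from 1 towards 4. The helper traced at bdev_raid.sh@116-@128 essentially does the following — an approximation, not a verbatim copy of the script:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state'      <<< "$info") == "configuring" ]]         # expected state for a partially assembled array
[[ $(jq -r '.raid_level' <<< "$info") == "raid1" ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") == 1 ]]      # 1 after BaseBdev1; 4 once every member exists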
00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.749 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:34.006 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:34.006 "name": "Existed_Raid", 00:15:34.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.006 "strip_size_kb": 0, 00:15:34.006 "state": "configuring", 00:15:34.006 "raid_level": "raid1", 00:15:34.006 "superblock": false, 00:15:34.006 "num_base_bdevs": 4, 00:15:34.006 "num_base_bdevs_discovered": 1, 00:15:34.006 "num_base_bdevs_operational": 4, 00:15:34.006 "base_bdevs_list": [ 00:15:34.006 { 00:15:34.006 "name": "BaseBdev1", 00:15:34.006 "uuid": "7cf9869e-42d0-11ef-96ac-773515fba644", 00:15:34.006 "is_configured": true, 00:15:34.006 "data_offset": 0, 00:15:34.006 "data_size": 65536 00:15:34.006 }, 00:15:34.006 { 00:15:34.006 "name": "BaseBdev2", 00:15:34.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.006 "is_configured": false, 00:15:34.006 "data_offset": 0, 00:15:34.006 "data_size": 0 00:15:34.006 }, 00:15:34.006 { 00:15:34.006 "name": "BaseBdev3", 00:15:34.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.006 "is_configured": false, 00:15:34.006 "data_offset": 0, 00:15:34.006 "data_size": 0 00:15:34.006 }, 00:15:34.006 { 00:15:34.006 "name": "BaseBdev4", 00:15:34.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.006 "is_configured": false, 00:15:34.006 "data_offset": 0, 00:15:34.006 "data_size": 0 00:15:34.006 } 00:15:34.006 ] 00:15:34.006 }' 00:15:34.006 17:34:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:34.006 17:34:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:34.262 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:34.538 [2024-07-15 17:34:30.297333] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:34.538 [2024-07-15 17:34:30.297369] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xca563234500 name Existed_Raid, state configuring 00:15:34.538 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:34.795 
[2024-07-15 17:34:30.557352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.795 [2024-07-15 17:34:30.558182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:34.795 [2024-07-15 17:34:30.558218] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:34.795 [2024-07-15 17:34:30.558224] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:34.795 [2024-07-15 17:34:30.558232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:34.795 [2024-07-15 17:34:30.558236] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:34.795 [2024-07-15 17:34:30.558243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.795 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:35.052 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:35.052 "name": "Existed_Raid", 00:15:35.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.052 "strip_size_kb": 0, 00:15:35.052 "state": "configuring", 00:15:35.052 "raid_level": "raid1", 00:15:35.052 "superblock": false, 00:15:35.052 "num_base_bdevs": 4, 00:15:35.052 "num_base_bdevs_discovered": 1, 00:15:35.052 "num_base_bdevs_operational": 4, 00:15:35.052 "base_bdevs_list": [ 00:15:35.052 { 00:15:35.052 "name": "BaseBdev1", 00:15:35.052 "uuid": "7cf9869e-42d0-11ef-96ac-773515fba644", 00:15:35.052 "is_configured": true, 00:15:35.052 "data_offset": 0, 00:15:35.052 "data_size": 65536 00:15:35.052 }, 00:15:35.052 { 00:15:35.052 "name": "BaseBdev2", 00:15:35.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.052 "is_configured": false, 00:15:35.052 "data_offset": 0, 00:15:35.052 "data_size": 0 00:15:35.052 }, 00:15:35.052 { 
00:15:35.052 "name": "BaseBdev3", 00:15:35.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.052 "is_configured": false, 00:15:35.052 "data_offset": 0, 00:15:35.052 "data_size": 0 00:15:35.052 }, 00:15:35.052 { 00:15:35.052 "name": "BaseBdev4", 00:15:35.052 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:35.052 "is_configured": false, 00:15:35.052 "data_offset": 0, 00:15:35.052 "data_size": 0 00:15:35.052 } 00:15:35.052 ] 00:15:35.052 }' 00:15:35.052 17:34:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:35.052 17:34:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:35.309 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.567 [2024-07-15 17:34:31.381530] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:35.567 BaseBdev2 00:15:35.825 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:35.825 17:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:35.825 17:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:35.825 17:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:35.825 17:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:35.825 17:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:35.825 17:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.084 17:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.350 [ 00:15:36.350 { 00:15:36.350 "name": "BaseBdev2", 00:15:36.350 "aliases": [ 00:15:36.350 "7e721e69-42d0-11ef-96ac-773515fba644" 00:15:36.350 ], 00:15:36.350 "product_name": "Malloc disk", 00:15:36.350 "block_size": 512, 00:15:36.350 "num_blocks": 65536, 00:15:36.350 "uuid": "7e721e69-42d0-11ef-96ac-773515fba644", 00:15:36.350 "assigned_rate_limits": { 00:15:36.350 "rw_ios_per_sec": 0, 00:15:36.350 "rw_mbytes_per_sec": 0, 00:15:36.350 "r_mbytes_per_sec": 0, 00:15:36.350 "w_mbytes_per_sec": 0 00:15:36.350 }, 00:15:36.350 "claimed": true, 00:15:36.350 "claim_type": "exclusive_write", 00:15:36.350 "zoned": false, 00:15:36.350 "supported_io_types": { 00:15:36.350 "read": true, 00:15:36.350 "write": true, 00:15:36.350 "unmap": true, 00:15:36.350 "flush": true, 00:15:36.350 "reset": true, 00:15:36.350 "nvme_admin": false, 00:15:36.350 "nvme_io": false, 00:15:36.350 "nvme_io_md": false, 00:15:36.350 "write_zeroes": true, 00:15:36.350 "zcopy": true, 00:15:36.350 "get_zone_info": false, 00:15:36.350 "zone_management": false, 00:15:36.350 "zone_append": false, 00:15:36.350 "compare": false, 00:15:36.350 "compare_and_write": false, 00:15:36.350 "abort": true, 00:15:36.350 "seek_hole": false, 00:15:36.350 "seek_data": false, 00:15:36.350 "copy": true, 00:15:36.350 "nvme_iov_md": false 00:15:36.350 }, 00:15:36.350 "memory_domains": [ 00:15:36.350 { 00:15:36.350 "dma_device_id": "system", 00:15:36.350 "dma_device_type": 1 00:15:36.350 }, 00:15:36.350 { 00:15:36.350 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.350 "dma_device_type": 2 00:15:36.350 } 00:15:36.350 ], 00:15:36.350 "driver_specific": {} 00:15:36.350 } 00:15:36.350 ] 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.350 17:34:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:36.350 17:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:36.350 "name": "Existed_Raid", 00:15:36.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.350 "strip_size_kb": 0, 00:15:36.350 "state": "configuring", 00:15:36.350 "raid_level": "raid1", 00:15:36.350 "superblock": false, 00:15:36.350 "num_base_bdevs": 4, 00:15:36.350 "num_base_bdevs_discovered": 2, 00:15:36.350 "num_base_bdevs_operational": 4, 00:15:36.350 "base_bdevs_list": [ 00:15:36.350 { 00:15:36.350 "name": "BaseBdev1", 00:15:36.350 "uuid": "7cf9869e-42d0-11ef-96ac-773515fba644", 00:15:36.350 "is_configured": true, 00:15:36.350 "data_offset": 0, 00:15:36.350 "data_size": 65536 00:15:36.350 }, 00:15:36.350 { 00:15:36.350 "name": "BaseBdev2", 00:15:36.350 "uuid": "7e721e69-42d0-11ef-96ac-773515fba644", 00:15:36.350 "is_configured": true, 00:15:36.350 "data_offset": 0, 00:15:36.350 "data_size": 65536 00:15:36.350 }, 00:15:36.350 { 00:15:36.350 "name": "BaseBdev3", 00:15:36.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.350 "is_configured": false, 00:15:36.350 "data_offset": 0, 00:15:36.350 "data_size": 0 00:15:36.350 }, 00:15:36.350 { 00:15:36.350 "name": "BaseBdev4", 00:15:36.350 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.350 "is_configured": false, 00:15:36.350 "data_offset": 0, 00:15:36.350 "data_size": 0 00:15:36.350 } 00:15:36.350 ] 00:15:36.350 }' 00:15:36.350 17:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:36.350 17:34:32 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:36.918 17:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.918 [2024-07-15 17:34:32.741521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.918 BaseBdev3 00:15:37.176 17:34:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:37.176 17:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:37.176 17:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:37.176 17:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:37.176 17:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:37.176 17:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:37.176 17:34:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:37.434 17:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.691 [ 00:15:37.691 { 00:15:37.691 "name": "BaseBdev3", 00:15:37.691 "aliases": [ 00:15:37.691 "7f41a519-42d0-11ef-96ac-773515fba644" 00:15:37.691 ], 00:15:37.691 "product_name": "Malloc disk", 00:15:37.691 "block_size": 512, 00:15:37.691 "num_blocks": 65536, 00:15:37.691 "uuid": "7f41a519-42d0-11ef-96ac-773515fba644", 00:15:37.691 "assigned_rate_limits": { 00:15:37.691 "rw_ios_per_sec": 0, 00:15:37.691 "rw_mbytes_per_sec": 0, 00:15:37.691 "r_mbytes_per_sec": 0, 00:15:37.691 "w_mbytes_per_sec": 0 00:15:37.691 }, 00:15:37.691 "claimed": true, 00:15:37.691 "claim_type": "exclusive_write", 00:15:37.691 "zoned": false, 00:15:37.691 "supported_io_types": { 00:15:37.691 "read": true, 00:15:37.691 "write": true, 00:15:37.691 "unmap": true, 00:15:37.691 "flush": true, 00:15:37.691 "reset": true, 00:15:37.691 "nvme_admin": false, 00:15:37.691 "nvme_io": false, 00:15:37.691 "nvme_io_md": false, 00:15:37.691 "write_zeroes": true, 00:15:37.691 "zcopy": true, 00:15:37.691 "get_zone_info": false, 00:15:37.691 "zone_management": false, 00:15:37.691 "zone_append": false, 00:15:37.691 "compare": false, 00:15:37.691 "compare_and_write": false, 00:15:37.691 "abort": true, 00:15:37.691 "seek_hole": false, 00:15:37.691 "seek_data": false, 00:15:37.691 "copy": true, 00:15:37.691 "nvme_iov_md": false 00:15:37.691 }, 00:15:37.691 "memory_domains": [ 00:15:37.691 { 00:15:37.691 "dma_device_id": "system", 00:15:37.691 "dma_device_type": 1 00:15:37.691 }, 00:15:37.691 { 00:15:37.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.691 "dma_device_type": 2 00:15:37.691 } 00:15:37.691 ], 00:15:37.691 "driver_specific": {} 00:15:37.691 } 00:15:37.691 ] 00:15:37.691 17:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.692 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:37.949 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:37.949 "name": "Existed_Raid", 00:15:37.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.949 "strip_size_kb": 0, 00:15:37.949 "state": "configuring", 00:15:37.949 "raid_level": "raid1", 00:15:37.949 "superblock": false, 00:15:37.949 "num_base_bdevs": 4, 00:15:37.949 "num_base_bdevs_discovered": 3, 00:15:37.949 "num_base_bdevs_operational": 4, 00:15:37.949 "base_bdevs_list": [ 00:15:37.949 { 00:15:37.949 "name": "BaseBdev1", 00:15:37.949 "uuid": "7cf9869e-42d0-11ef-96ac-773515fba644", 00:15:37.949 "is_configured": true, 00:15:37.949 "data_offset": 0, 00:15:37.949 "data_size": 65536 00:15:37.949 }, 00:15:37.949 { 00:15:37.949 "name": "BaseBdev2", 00:15:37.949 "uuid": "7e721e69-42d0-11ef-96ac-773515fba644", 00:15:37.949 "is_configured": true, 00:15:37.949 "data_offset": 0, 00:15:37.949 "data_size": 65536 00:15:37.949 }, 00:15:37.949 { 00:15:37.949 "name": "BaseBdev3", 00:15:37.949 "uuid": "7f41a519-42d0-11ef-96ac-773515fba644", 00:15:37.949 "is_configured": true, 00:15:37.949 "data_offset": 0, 00:15:37.949 "data_size": 65536 00:15:37.949 }, 00:15:37.949 { 00:15:37.949 "name": "BaseBdev4", 00:15:37.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.949 "is_configured": false, 00:15:37.949 "data_offset": 0, 00:15:37.949 "data_size": 0 00:15:37.949 } 00:15:37.949 ] 00:15:37.949 }' 00:15:37.949 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:37.949 17:34:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.207 17:34:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:38.465 [2024-07-15 17:34:34.121579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.465 [2024-07-15 17:34:34.121606] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xca563234a00 00:15:38.465 [2024-07-15 17:34:34.121611] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:38.465 
[2024-07-15 17:34:34.121656] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xca563297e20 00:15:38.465 [2024-07-15 17:34:34.121746] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xca563234a00 00:15:38.465 [2024-07-15 17:34:34.121751] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xca563234a00 00:15:38.465 [2024-07-15 17:34:34.121785] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.465 BaseBdev4 00:15:38.465 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:38.465 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:38.465 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:38.465 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:38.465 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:38.465 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:38.465 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:38.723 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:38.981 [ 00:15:38.981 { 00:15:38.981 "name": "BaseBdev4", 00:15:38.981 "aliases": [ 00:15:38.981 "8014391c-42d0-11ef-96ac-773515fba644" 00:15:38.981 ], 00:15:38.981 "product_name": "Malloc disk", 00:15:38.981 "block_size": 512, 00:15:38.981 "num_blocks": 65536, 00:15:38.981 "uuid": "8014391c-42d0-11ef-96ac-773515fba644", 00:15:38.981 "assigned_rate_limits": { 00:15:38.981 "rw_ios_per_sec": 0, 00:15:38.981 "rw_mbytes_per_sec": 0, 00:15:38.981 "r_mbytes_per_sec": 0, 00:15:38.981 "w_mbytes_per_sec": 0 00:15:38.981 }, 00:15:38.981 "claimed": true, 00:15:38.981 "claim_type": "exclusive_write", 00:15:38.981 "zoned": false, 00:15:38.981 "supported_io_types": { 00:15:38.981 "read": true, 00:15:38.981 "write": true, 00:15:38.981 "unmap": true, 00:15:38.981 "flush": true, 00:15:38.981 "reset": true, 00:15:38.981 "nvme_admin": false, 00:15:38.981 "nvme_io": false, 00:15:38.981 "nvme_io_md": false, 00:15:38.981 "write_zeroes": true, 00:15:38.981 "zcopy": true, 00:15:38.981 "get_zone_info": false, 00:15:38.981 "zone_management": false, 00:15:38.981 "zone_append": false, 00:15:38.981 "compare": false, 00:15:38.981 "compare_and_write": false, 00:15:38.981 "abort": true, 00:15:38.981 "seek_hole": false, 00:15:38.981 "seek_data": false, 00:15:38.981 "copy": true, 00:15:38.981 "nvme_iov_md": false 00:15:38.981 }, 00:15:38.981 "memory_domains": [ 00:15:38.981 { 00:15:38.981 "dma_device_id": "system", 00:15:38.981 "dma_device_type": 1 00:15:38.981 }, 00:15:38.981 { 00:15:38.981 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:38.981 "dma_device_type": 2 00:15:38.981 } 00:15:38.981 ], 00:15:38.981 "driver_specific": {} 00:15:38.981 } 00:15:38.981 ] 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.981 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.239 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:39.239 "name": "Existed_Raid", 00:15:39.239 "uuid": "8014408d-42d0-11ef-96ac-773515fba644", 00:15:39.239 "strip_size_kb": 0, 00:15:39.239 "state": "online", 00:15:39.239 "raid_level": "raid1", 00:15:39.239 "superblock": false, 00:15:39.239 "num_base_bdevs": 4, 00:15:39.239 "num_base_bdevs_discovered": 4, 00:15:39.239 "num_base_bdevs_operational": 4, 00:15:39.239 "base_bdevs_list": [ 00:15:39.239 { 00:15:39.239 "name": "BaseBdev1", 00:15:39.239 "uuid": "7cf9869e-42d0-11ef-96ac-773515fba644", 00:15:39.239 "is_configured": true, 00:15:39.239 "data_offset": 0, 00:15:39.239 "data_size": 65536 00:15:39.239 }, 00:15:39.239 { 00:15:39.239 "name": "BaseBdev2", 00:15:39.239 "uuid": "7e721e69-42d0-11ef-96ac-773515fba644", 00:15:39.239 "is_configured": true, 00:15:39.239 "data_offset": 0, 00:15:39.239 "data_size": 65536 00:15:39.239 }, 00:15:39.239 { 00:15:39.239 "name": "BaseBdev3", 00:15:39.239 "uuid": "7f41a519-42d0-11ef-96ac-773515fba644", 00:15:39.239 "is_configured": true, 00:15:39.239 "data_offset": 0, 00:15:39.239 "data_size": 65536 00:15:39.239 }, 00:15:39.239 { 00:15:39.239 "name": "BaseBdev4", 00:15:39.239 "uuid": "8014391c-42d0-11ef-96ac-773515fba644", 00:15:39.239 "is_configured": true, 00:15:39.239 "data_offset": 0, 00:15:39.239 "data_size": 65536 00:15:39.239 } 00:15:39.239 ] 00:15:39.239 }' 00:15:39.239 17:34:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:39.239 17:34:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.497 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:39.497 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:39.497 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:39.497 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # 
local base_bdev_info 00:15:39.497 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:39.497 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:39.497 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:39.497 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:39.757 [2024-07-15 17:34:35.429508] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:39.757 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:39.757 "name": "Existed_Raid", 00:15:39.757 "aliases": [ 00:15:39.757 "8014408d-42d0-11ef-96ac-773515fba644" 00:15:39.757 ], 00:15:39.757 "product_name": "Raid Volume", 00:15:39.757 "block_size": 512, 00:15:39.757 "num_blocks": 65536, 00:15:39.757 "uuid": "8014408d-42d0-11ef-96ac-773515fba644", 00:15:39.757 "assigned_rate_limits": { 00:15:39.757 "rw_ios_per_sec": 0, 00:15:39.757 "rw_mbytes_per_sec": 0, 00:15:39.757 "r_mbytes_per_sec": 0, 00:15:39.757 "w_mbytes_per_sec": 0 00:15:39.757 }, 00:15:39.757 "claimed": false, 00:15:39.757 "zoned": false, 00:15:39.757 "supported_io_types": { 00:15:39.757 "read": true, 00:15:39.757 "write": true, 00:15:39.757 "unmap": false, 00:15:39.757 "flush": false, 00:15:39.757 "reset": true, 00:15:39.757 "nvme_admin": false, 00:15:39.757 "nvme_io": false, 00:15:39.757 "nvme_io_md": false, 00:15:39.757 "write_zeroes": true, 00:15:39.757 "zcopy": false, 00:15:39.757 "get_zone_info": false, 00:15:39.757 "zone_management": false, 00:15:39.757 "zone_append": false, 00:15:39.757 "compare": false, 00:15:39.757 "compare_and_write": false, 00:15:39.757 "abort": false, 00:15:39.757 "seek_hole": false, 00:15:39.757 "seek_data": false, 00:15:39.757 "copy": false, 00:15:39.757 "nvme_iov_md": false 00:15:39.757 }, 00:15:39.757 "memory_domains": [ 00:15:39.757 { 00:15:39.757 "dma_device_id": "system", 00:15:39.757 "dma_device_type": 1 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.757 "dma_device_type": 2 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "dma_device_id": "system", 00:15:39.757 "dma_device_type": 1 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.757 "dma_device_type": 2 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "dma_device_id": "system", 00:15:39.757 "dma_device_type": 1 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.757 "dma_device_type": 2 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "dma_device_id": "system", 00:15:39.757 "dma_device_type": 1 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:39.757 "dma_device_type": 2 00:15:39.757 } 00:15:39.757 ], 00:15:39.757 "driver_specific": { 00:15:39.757 "raid": { 00:15:39.757 "uuid": "8014408d-42d0-11ef-96ac-773515fba644", 00:15:39.757 "strip_size_kb": 0, 00:15:39.757 "state": "online", 00:15:39.757 "raid_level": "raid1", 00:15:39.757 "superblock": false, 00:15:39.757 "num_base_bdevs": 4, 00:15:39.757 "num_base_bdevs_discovered": 4, 00:15:39.757 "num_base_bdevs_operational": 4, 00:15:39.757 "base_bdevs_list": [ 00:15:39.757 { 00:15:39.757 "name": "BaseBdev1", 00:15:39.757 "uuid": "7cf9869e-42d0-11ef-96ac-773515fba644", 00:15:39.757 "is_configured": true, 00:15:39.757 "data_offset": 0, 00:15:39.757 
"data_size": 65536 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "name": "BaseBdev2", 00:15:39.757 "uuid": "7e721e69-42d0-11ef-96ac-773515fba644", 00:15:39.757 "is_configured": true, 00:15:39.757 "data_offset": 0, 00:15:39.757 "data_size": 65536 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "name": "BaseBdev3", 00:15:39.757 "uuid": "7f41a519-42d0-11ef-96ac-773515fba644", 00:15:39.757 "is_configured": true, 00:15:39.757 "data_offset": 0, 00:15:39.757 "data_size": 65536 00:15:39.757 }, 00:15:39.757 { 00:15:39.757 "name": "BaseBdev4", 00:15:39.757 "uuid": "8014391c-42d0-11ef-96ac-773515fba644", 00:15:39.757 "is_configured": true, 00:15:39.757 "data_offset": 0, 00:15:39.757 "data_size": 65536 00:15:39.757 } 00:15:39.757 ] 00:15:39.757 } 00:15:39.757 } 00:15:39.757 }' 00:15:39.757 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:39.757 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:39.757 BaseBdev2 00:15:39.757 BaseBdev3 00:15:39.757 BaseBdev4' 00:15:39.757 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:39.757 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:39.757 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.016 "name": "BaseBdev1", 00:15:40.016 "aliases": [ 00:15:40.016 "7cf9869e-42d0-11ef-96ac-773515fba644" 00:15:40.016 ], 00:15:40.016 "product_name": "Malloc disk", 00:15:40.016 "block_size": 512, 00:15:40.016 "num_blocks": 65536, 00:15:40.016 "uuid": "7cf9869e-42d0-11ef-96ac-773515fba644", 00:15:40.016 "assigned_rate_limits": { 00:15:40.016 "rw_ios_per_sec": 0, 00:15:40.016 "rw_mbytes_per_sec": 0, 00:15:40.016 "r_mbytes_per_sec": 0, 00:15:40.016 "w_mbytes_per_sec": 0 00:15:40.016 }, 00:15:40.016 "claimed": true, 00:15:40.016 "claim_type": "exclusive_write", 00:15:40.016 "zoned": false, 00:15:40.016 "supported_io_types": { 00:15:40.016 "read": true, 00:15:40.016 "write": true, 00:15:40.016 "unmap": true, 00:15:40.016 "flush": true, 00:15:40.016 "reset": true, 00:15:40.016 "nvme_admin": false, 00:15:40.016 "nvme_io": false, 00:15:40.016 "nvme_io_md": false, 00:15:40.016 "write_zeroes": true, 00:15:40.016 "zcopy": true, 00:15:40.016 "get_zone_info": false, 00:15:40.016 "zone_management": false, 00:15:40.016 "zone_append": false, 00:15:40.016 "compare": false, 00:15:40.016 "compare_and_write": false, 00:15:40.016 "abort": true, 00:15:40.016 "seek_hole": false, 00:15:40.016 "seek_data": false, 00:15:40.016 "copy": true, 00:15:40.016 "nvme_iov_md": false 00:15:40.016 }, 00:15:40.016 "memory_domains": [ 00:15:40.016 { 00:15:40.016 "dma_device_id": "system", 00:15:40.016 "dma_device_type": 1 00:15:40.016 }, 00:15:40.016 { 00:15:40.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.016 "dma_device_type": 2 00:15:40.016 } 00:15:40.016 ], 00:15:40.016 "driver_specific": {} 00:15:40.016 }' 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:40.016 17:34:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.275 "name": "BaseBdev2", 00:15:40.275 "aliases": [ 00:15:40.275 "7e721e69-42d0-11ef-96ac-773515fba644" 00:15:40.275 ], 00:15:40.275 "product_name": "Malloc disk", 00:15:40.275 "block_size": 512, 00:15:40.275 "num_blocks": 65536, 00:15:40.275 "uuid": "7e721e69-42d0-11ef-96ac-773515fba644", 00:15:40.275 "assigned_rate_limits": { 00:15:40.275 "rw_ios_per_sec": 0, 00:15:40.275 "rw_mbytes_per_sec": 0, 00:15:40.275 "r_mbytes_per_sec": 0, 00:15:40.275 "w_mbytes_per_sec": 0 00:15:40.275 }, 00:15:40.275 "claimed": true, 00:15:40.275 "claim_type": "exclusive_write", 00:15:40.275 "zoned": false, 00:15:40.275 "supported_io_types": { 00:15:40.275 "read": true, 00:15:40.275 "write": true, 00:15:40.275 "unmap": true, 00:15:40.275 "flush": true, 00:15:40.275 "reset": true, 00:15:40.275 "nvme_admin": false, 00:15:40.275 "nvme_io": false, 00:15:40.275 "nvme_io_md": false, 00:15:40.275 "write_zeroes": true, 00:15:40.275 "zcopy": true, 00:15:40.275 "get_zone_info": false, 00:15:40.275 "zone_management": false, 00:15:40.275 "zone_append": false, 00:15:40.275 "compare": false, 00:15:40.275 "compare_and_write": false, 00:15:40.275 "abort": true, 00:15:40.275 "seek_hole": false, 00:15:40.275 "seek_data": false, 00:15:40.275 "copy": true, 00:15:40.275 "nvme_iov_md": false 00:15:40.275 }, 00:15:40.275 "memory_domains": [ 00:15:40.275 { 00:15:40.275 "dma_device_id": "system", 00:15:40.275 "dma_device_type": 1 00:15:40.275 }, 00:15:40.275 { 00:15:40.275 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.275 "dma_device_type": 2 00:15:40.275 } 00:15:40.275 ], 00:15:40.275 "driver_specific": {} 00:15:40.275 }' 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.275 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.533 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.533 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.533 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:40.533 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:40.791 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:40.791 "name": "BaseBdev3", 00:15:40.791 "aliases": [ 00:15:40.791 "7f41a519-42d0-11ef-96ac-773515fba644" 00:15:40.791 ], 00:15:40.791 "product_name": "Malloc disk", 00:15:40.791 "block_size": 512, 00:15:40.791 "num_blocks": 65536, 00:15:40.791 "uuid": "7f41a519-42d0-11ef-96ac-773515fba644", 00:15:40.791 "assigned_rate_limits": { 00:15:40.791 "rw_ios_per_sec": 0, 00:15:40.791 "rw_mbytes_per_sec": 0, 00:15:40.791 "r_mbytes_per_sec": 0, 00:15:40.791 "w_mbytes_per_sec": 0 00:15:40.791 }, 00:15:40.791 "claimed": true, 00:15:40.791 "claim_type": "exclusive_write", 00:15:40.791 "zoned": false, 00:15:40.791 "supported_io_types": { 00:15:40.791 "read": true, 00:15:40.791 "write": true, 00:15:40.791 "unmap": true, 00:15:40.791 "flush": true, 00:15:40.791 "reset": true, 00:15:40.791 "nvme_admin": false, 00:15:40.791 "nvme_io": false, 00:15:40.791 "nvme_io_md": false, 00:15:40.791 "write_zeroes": true, 00:15:40.791 "zcopy": true, 00:15:40.791 "get_zone_info": false, 00:15:40.791 "zone_management": false, 00:15:40.791 "zone_append": false, 00:15:40.791 "compare": false, 00:15:40.791 "compare_and_write": false, 00:15:40.791 "abort": true, 00:15:40.791 "seek_hole": false, 00:15:40.791 "seek_data": false, 00:15:40.791 "copy": true, 00:15:40.791 "nvme_iov_md": false 00:15:40.791 }, 00:15:40.791 "memory_domains": [ 00:15:40.791 { 00:15:40.791 "dma_device_id": "system", 00:15:40.791 "dma_device_type": 1 00:15:40.791 }, 00:15:40.791 { 00:15:40.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.792 "dma_device_type": 2 00:15:40.792 } 00:15:40.792 ], 00:15:40.792 "driver_specific": {} 00:15:40.792 }' 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:40.792 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:41.050 "name": "BaseBdev4", 00:15:41.050 "aliases": [ 00:15:41.050 "8014391c-42d0-11ef-96ac-773515fba644" 00:15:41.050 ], 00:15:41.050 "product_name": "Malloc disk", 00:15:41.050 "block_size": 512, 00:15:41.050 "num_blocks": 65536, 00:15:41.050 "uuid": "8014391c-42d0-11ef-96ac-773515fba644", 00:15:41.050 "assigned_rate_limits": { 00:15:41.050 "rw_ios_per_sec": 0, 00:15:41.050 "rw_mbytes_per_sec": 0, 00:15:41.050 "r_mbytes_per_sec": 0, 00:15:41.050 "w_mbytes_per_sec": 0 00:15:41.050 }, 00:15:41.050 "claimed": true, 00:15:41.050 "claim_type": "exclusive_write", 00:15:41.050 "zoned": false, 00:15:41.050 "supported_io_types": { 00:15:41.050 "read": true, 00:15:41.050 "write": true, 00:15:41.050 "unmap": true, 00:15:41.050 "flush": true, 00:15:41.050 "reset": true, 00:15:41.050 "nvme_admin": false, 00:15:41.050 "nvme_io": false, 00:15:41.050 "nvme_io_md": false, 00:15:41.050 "write_zeroes": true, 00:15:41.050 "zcopy": true, 00:15:41.050 "get_zone_info": false, 00:15:41.050 "zone_management": false, 00:15:41.050 "zone_append": false, 00:15:41.050 "compare": false, 00:15:41.050 "compare_and_write": false, 00:15:41.050 "abort": true, 00:15:41.050 "seek_hole": false, 00:15:41.050 "seek_data": false, 00:15:41.050 "copy": true, 00:15:41.050 "nvme_iov_md": false 00:15:41.050 }, 00:15:41.050 "memory_domains": [ 00:15:41.050 { 00:15:41.050 "dma_device_id": "system", 00:15:41.050 "dma_device_type": 1 00:15:41.050 }, 00:15:41.050 { 00:15:41.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.050 "dma_device_type": 2 00:15:41.050 } 00:15:41.050 ], 00:15:41.050 "driver_specific": {} 00:15:41.050 }' 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:41.050 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:41.308 [2024-07-15 17:34:36.961591] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.308 17:34:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.566 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.566 "name": "Existed_Raid", 00:15:41.566 "uuid": "8014408d-42d0-11ef-96ac-773515fba644", 00:15:41.566 "strip_size_kb": 0, 00:15:41.566 "state": "online", 00:15:41.566 "raid_level": "raid1", 00:15:41.566 "superblock": false, 00:15:41.566 "num_base_bdevs": 4, 00:15:41.566 "num_base_bdevs_discovered": 3, 00:15:41.566 "num_base_bdevs_operational": 3, 00:15:41.566 "base_bdevs_list": [ 00:15:41.566 { 00:15:41.566 "name": null, 00:15:41.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.566 "is_configured": false, 00:15:41.566 "data_offset": 0, 00:15:41.566 "data_size": 65536 00:15:41.566 }, 00:15:41.566 { 00:15:41.566 "name": "BaseBdev2", 00:15:41.566 "uuid": "7e721e69-42d0-11ef-96ac-773515fba644", 00:15:41.566 "is_configured": true, 00:15:41.566 "data_offset": 0, 00:15:41.566 "data_size": 65536 
00:15:41.566 }, 00:15:41.566 { 00:15:41.566 "name": "BaseBdev3", 00:15:41.566 "uuid": "7f41a519-42d0-11ef-96ac-773515fba644", 00:15:41.566 "is_configured": true, 00:15:41.566 "data_offset": 0, 00:15:41.566 "data_size": 65536 00:15:41.566 }, 00:15:41.567 { 00:15:41.567 "name": "BaseBdev4", 00:15:41.567 "uuid": "8014391c-42d0-11ef-96ac-773515fba644", 00:15:41.567 "is_configured": true, 00:15:41.567 "data_offset": 0, 00:15:41.567 "data_size": 65536 00:15:41.567 } 00:15:41.567 ] 00:15:41.567 }' 00:15:41.567 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.567 17:34:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.825 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:41.825 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:41.825 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.825 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:42.085 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:42.085 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.085 17:34:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:42.344 [2024-07-15 17:34:38.115483] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.344 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:42.344 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:42.344 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.344 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:42.601 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:42.601 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:42.601 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:42.858 [2024-07-15 17:34:38.653200] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:42.858 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:42.858 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:42.858 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.858 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:43.116 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:43.116 17:34:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.116 17:34:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:43.373 [2024-07-15 17:34:39.158940] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:43.373 [2024-07-15 17:34:39.158973] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.373 [2024-07-15 17:34:39.164702] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.373 [2024-07-15 17:34:39.164719] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:43.373 [2024-07-15 17:34:39.164723] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xca563234a00 name Existed_Raid, state offline 00:15:43.373 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:43.373 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:43.373 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.373 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.630 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:43.630 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:43.630 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:43.630 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:43.630 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:43.630 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.888 BaseBdev2 00:15:43.888 17:34:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:43.888 17:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:43.888 17:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:43.888 17:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:43.888 17:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:43.888 17:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:43.888 17:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:44.146 17:34:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:44.404 [ 00:15:44.404 { 00:15:44.404 "name": "BaseBdev2", 00:15:44.404 "aliases": [ 00:15:44.404 "8361720a-42d0-11ef-96ac-773515fba644" 00:15:44.404 ], 00:15:44.404 "product_name": "Malloc disk", 00:15:44.404 "block_size": 512, 00:15:44.404 "num_blocks": 65536, 00:15:44.404 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:44.404 "assigned_rate_limits": { 00:15:44.404 "rw_ios_per_sec": 0, 00:15:44.404 "rw_mbytes_per_sec": 0, 00:15:44.404 
"r_mbytes_per_sec": 0, 00:15:44.404 "w_mbytes_per_sec": 0 00:15:44.404 }, 00:15:44.404 "claimed": false, 00:15:44.404 "zoned": false, 00:15:44.404 "supported_io_types": { 00:15:44.404 "read": true, 00:15:44.404 "write": true, 00:15:44.404 "unmap": true, 00:15:44.404 "flush": true, 00:15:44.404 "reset": true, 00:15:44.404 "nvme_admin": false, 00:15:44.404 "nvme_io": false, 00:15:44.404 "nvme_io_md": false, 00:15:44.404 "write_zeroes": true, 00:15:44.404 "zcopy": true, 00:15:44.404 "get_zone_info": false, 00:15:44.404 "zone_management": false, 00:15:44.404 "zone_append": false, 00:15:44.404 "compare": false, 00:15:44.404 "compare_and_write": false, 00:15:44.404 "abort": true, 00:15:44.404 "seek_hole": false, 00:15:44.404 "seek_data": false, 00:15:44.404 "copy": true, 00:15:44.404 "nvme_iov_md": false 00:15:44.404 }, 00:15:44.404 "memory_domains": [ 00:15:44.404 { 00:15:44.404 "dma_device_id": "system", 00:15:44.404 "dma_device_type": 1 00:15:44.404 }, 00:15:44.404 { 00:15:44.404 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.404 "dma_device_type": 2 00:15:44.404 } 00:15:44.404 ], 00:15:44.404 "driver_specific": {} 00:15:44.404 } 00:15:44.404 ] 00:15:44.662 17:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:44.662 17:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:44.662 17:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:44.662 17:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:44.920 BaseBdev3 00:15:44.920 17:34:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:44.920 17:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:44.920 17:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:44.920 17:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:44.920 17:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:44.920 17:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:44.920 17:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.178 17:34:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:45.178 [ 00:15:45.178 { 00:15:45.178 "name": "BaseBdev3", 00:15:45.178 "aliases": [ 00:15:45.178 "83e19ee8-42d0-11ef-96ac-773515fba644" 00:15:45.178 ], 00:15:45.178 "product_name": "Malloc disk", 00:15:45.178 "block_size": 512, 00:15:45.178 "num_blocks": 65536, 00:15:45.178 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:45.178 "assigned_rate_limits": { 00:15:45.178 "rw_ios_per_sec": 0, 00:15:45.178 "rw_mbytes_per_sec": 0, 00:15:45.178 "r_mbytes_per_sec": 0, 00:15:45.178 "w_mbytes_per_sec": 0 00:15:45.178 }, 00:15:45.178 "claimed": false, 00:15:45.178 "zoned": false, 00:15:45.178 "supported_io_types": { 00:15:45.178 "read": true, 00:15:45.178 "write": true, 00:15:45.178 "unmap": true, 00:15:45.178 "flush": true, 00:15:45.178 "reset": true, 00:15:45.178 "nvme_admin": false, 
00:15:45.178 "nvme_io": false, 00:15:45.178 "nvme_io_md": false, 00:15:45.178 "write_zeroes": true, 00:15:45.178 "zcopy": true, 00:15:45.178 "get_zone_info": false, 00:15:45.178 "zone_management": false, 00:15:45.178 "zone_append": false, 00:15:45.178 "compare": false, 00:15:45.178 "compare_and_write": false, 00:15:45.178 "abort": true, 00:15:45.178 "seek_hole": false, 00:15:45.178 "seek_data": false, 00:15:45.178 "copy": true, 00:15:45.178 "nvme_iov_md": false 00:15:45.178 }, 00:15:45.178 "memory_domains": [ 00:15:45.178 { 00:15:45.178 "dma_device_id": "system", 00:15:45.178 "dma_device_type": 1 00:15:45.178 }, 00:15:45.178 { 00:15:45.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.178 "dma_device_type": 2 00:15:45.178 } 00:15:45.178 ], 00:15:45.178 "driver_specific": {} 00:15:45.178 } 00:15:45.178 ] 00:15:45.178 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:45.178 17:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:45.178 17:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:45.178 17:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:45.436 BaseBdev4 00:15:45.695 17:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:45.695 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:45.695 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:45.695 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:45.695 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:45.695 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:45.695 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.954 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:45.954 [ 00:15:45.954 { 00:15:45.954 "name": "BaseBdev4", 00:15:45.954 "aliases": [ 00:15:45.954 "8454fa85-42d0-11ef-96ac-773515fba644" 00:15:45.954 ], 00:15:45.954 "product_name": "Malloc disk", 00:15:45.954 "block_size": 512, 00:15:45.954 "num_blocks": 65536, 00:15:45.954 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:45.954 "assigned_rate_limits": { 00:15:45.954 "rw_ios_per_sec": 0, 00:15:45.954 "rw_mbytes_per_sec": 0, 00:15:45.954 "r_mbytes_per_sec": 0, 00:15:45.954 "w_mbytes_per_sec": 0 00:15:45.954 }, 00:15:45.954 "claimed": false, 00:15:45.954 "zoned": false, 00:15:45.954 "supported_io_types": { 00:15:45.954 "read": true, 00:15:45.954 "write": true, 00:15:45.954 "unmap": true, 00:15:45.954 "flush": true, 00:15:45.954 "reset": true, 00:15:45.954 "nvme_admin": false, 00:15:45.954 "nvme_io": false, 00:15:45.954 "nvme_io_md": false, 00:15:45.954 "write_zeroes": true, 00:15:45.954 "zcopy": true, 00:15:45.954 "get_zone_info": false, 00:15:45.954 "zone_management": false, 00:15:45.954 "zone_append": false, 00:15:45.954 "compare": false, 00:15:45.954 "compare_and_write": false, 00:15:45.954 "abort": true, 
00:15:45.954 "seek_hole": false, 00:15:45.954 "seek_data": false, 00:15:45.954 "copy": true, 00:15:45.954 "nvme_iov_md": false 00:15:45.954 }, 00:15:45.954 "memory_domains": [ 00:15:45.954 { 00:15:45.954 "dma_device_id": "system", 00:15:45.954 "dma_device_type": 1 00:15:45.954 }, 00:15:45.954 { 00:15:45.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:45.954 "dma_device_type": 2 00:15:45.954 } 00:15:45.954 ], 00:15:45.954 "driver_specific": {} 00:15:45.954 } 00:15:45.954 ] 00:15:45.954 17:34:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:45.954 17:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:45.954 17:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:45.954 17:34:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:46.212 [2024-07-15 17:34:41.996722] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:46.212 [2024-07-15 17:34:41.996771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:46.212 [2024-07-15 17:34:41.996780] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.212 [2024-07-15 17:34:41.997318] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:46.212 [2024-07-15 17:34:41.997336] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:46.212 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:46.212 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.212 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:46.212 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.212 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.212 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:46.212 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.212 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.213 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.213 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.213 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.213 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.471 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.471 "name": "Existed_Raid", 00:15:46.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.471 "strip_size_kb": 0, 00:15:46.471 "state": "configuring", 00:15:46.471 "raid_level": "raid1", 00:15:46.471 "superblock": false, 00:15:46.471 "num_base_bdevs": 4, 00:15:46.471 
"num_base_bdevs_discovered": 3, 00:15:46.471 "num_base_bdevs_operational": 4, 00:15:46.471 "base_bdevs_list": [ 00:15:46.471 { 00:15:46.471 "name": "BaseBdev1", 00:15:46.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.471 "is_configured": false, 00:15:46.471 "data_offset": 0, 00:15:46.471 "data_size": 0 00:15:46.471 }, 00:15:46.471 { 00:15:46.471 "name": "BaseBdev2", 00:15:46.471 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:46.471 "is_configured": true, 00:15:46.471 "data_offset": 0, 00:15:46.471 "data_size": 65536 00:15:46.471 }, 00:15:46.471 { 00:15:46.471 "name": "BaseBdev3", 00:15:46.471 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:46.471 "is_configured": true, 00:15:46.471 "data_offset": 0, 00:15:46.471 "data_size": 65536 00:15:46.471 }, 00:15:46.471 { 00:15:46.471 "name": "BaseBdev4", 00:15:46.471 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:46.471 "is_configured": true, 00:15:46.471 "data_offset": 0, 00:15:46.471 "data_size": 65536 00:15:46.471 } 00:15:46.471 ] 00:15:46.471 }' 00:15:46.471 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.471 17:34:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.036 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:47.295 [2024-07-15 17:34:42.888732] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:47.295 17:34:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:47.556 17:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:47.556 "name": "Existed_Raid", 00:15:47.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.556 "strip_size_kb": 0, 00:15:47.556 "state": "configuring", 00:15:47.556 "raid_level": "raid1", 00:15:47.556 "superblock": false, 00:15:47.556 "num_base_bdevs": 4, 00:15:47.556 "num_base_bdevs_discovered": 2, 00:15:47.556 "num_base_bdevs_operational": 4, 00:15:47.556 "base_bdevs_list": [ 00:15:47.556 { 00:15:47.556 "name": 
"BaseBdev1", 00:15:47.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:47.556 "is_configured": false, 00:15:47.556 "data_offset": 0, 00:15:47.556 "data_size": 0 00:15:47.556 }, 00:15:47.556 { 00:15:47.556 "name": null, 00:15:47.556 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:47.556 "is_configured": false, 00:15:47.556 "data_offset": 0, 00:15:47.556 "data_size": 65536 00:15:47.556 }, 00:15:47.556 { 00:15:47.556 "name": "BaseBdev3", 00:15:47.556 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:47.556 "is_configured": true, 00:15:47.557 "data_offset": 0, 00:15:47.557 "data_size": 65536 00:15:47.557 }, 00:15:47.557 { 00:15:47.557 "name": "BaseBdev4", 00:15:47.557 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:47.557 "is_configured": true, 00:15:47.557 "data_offset": 0, 00:15:47.557 "data_size": 65536 00:15:47.557 } 00:15:47.557 ] 00:15:47.557 }' 00:15:47.557 17:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:47.557 17:34:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.815 17:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:47.815 17:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.073 17:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:48.073 17:34:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:48.331 [2024-07-15 17:34:44.144887] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.331 BaseBdev1 00:15:48.331 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:48.331 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:48.331 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:48.331 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:48.331 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:48.331 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:48.331 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:48.897 [ 00:15:48.897 { 00:15:48.897 "name": "BaseBdev1", 00:15:48.897 "aliases": [ 00:15:48.897 "860da8c7-42d0-11ef-96ac-773515fba644" 00:15:48.897 ], 00:15:48.897 "product_name": "Malloc disk", 00:15:48.897 "block_size": 512, 00:15:48.897 "num_blocks": 65536, 00:15:48.897 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:48.897 "assigned_rate_limits": { 00:15:48.897 "rw_ios_per_sec": 0, 00:15:48.897 "rw_mbytes_per_sec": 0, 00:15:48.897 "r_mbytes_per_sec": 0, 00:15:48.897 "w_mbytes_per_sec": 0 00:15:48.897 }, 00:15:48.897 "claimed": true, 00:15:48.897 "claim_type": "exclusive_write", 00:15:48.897 "zoned": false, 
00:15:48.897 "supported_io_types": { 00:15:48.897 "read": true, 00:15:48.897 "write": true, 00:15:48.897 "unmap": true, 00:15:48.897 "flush": true, 00:15:48.897 "reset": true, 00:15:48.897 "nvme_admin": false, 00:15:48.897 "nvme_io": false, 00:15:48.897 "nvme_io_md": false, 00:15:48.897 "write_zeroes": true, 00:15:48.897 "zcopy": true, 00:15:48.897 "get_zone_info": false, 00:15:48.897 "zone_management": false, 00:15:48.897 "zone_append": false, 00:15:48.897 "compare": false, 00:15:48.897 "compare_and_write": false, 00:15:48.897 "abort": true, 00:15:48.897 "seek_hole": false, 00:15:48.897 "seek_data": false, 00:15:48.897 "copy": true, 00:15:48.897 "nvme_iov_md": false 00:15:48.897 }, 00:15:48.897 "memory_domains": [ 00:15:48.897 { 00:15:48.897 "dma_device_id": "system", 00:15:48.897 "dma_device_type": 1 00:15:48.897 }, 00:15:48.897 { 00:15:48.897 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.897 "dma_device_type": 2 00:15:48.897 } 00:15:48.897 ], 00:15:48.897 "driver_specific": {} 00:15:48.897 } 00:15:48.897 ] 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:48.897 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:48.898 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:48.898 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:48.898 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:48.898 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:48.898 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.156 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:49.156 "name": "Existed_Raid", 00:15:49.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.156 "strip_size_kb": 0, 00:15:49.156 "state": "configuring", 00:15:49.156 "raid_level": "raid1", 00:15:49.156 "superblock": false, 00:15:49.156 "num_base_bdevs": 4, 00:15:49.156 "num_base_bdevs_discovered": 3, 00:15:49.156 "num_base_bdevs_operational": 4, 00:15:49.156 "base_bdevs_list": [ 00:15:49.156 { 00:15:49.156 "name": "BaseBdev1", 00:15:49.156 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:49.156 "is_configured": true, 00:15:49.156 "data_offset": 0, 00:15:49.156 "data_size": 65536 00:15:49.156 }, 00:15:49.156 { 00:15:49.156 "name": null, 00:15:49.156 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:49.156 "is_configured": false, 00:15:49.156 "data_offset": 0, 00:15:49.156 "data_size": 65536 00:15:49.156 }, 
00:15:49.156 { 00:15:49.156 "name": "BaseBdev3", 00:15:49.156 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:49.156 "is_configured": true, 00:15:49.156 "data_offset": 0, 00:15:49.156 "data_size": 65536 00:15:49.156 }, 00:15:49.156 { 00:15:49.156 "name": "BaseBdev4", 00:15:49.156 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:49.156 "is_configured": true, 00:15:49.156 "data_offset": 0, 00:15:49.156 "data_size": 65536 00:15:49.156 } 00:15:49.156 ] 00:15:49.156 }' 00:15:49.156 17:34:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:49.156 17:34:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.723 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.723 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:49.981 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:49.981 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:50.239 [2024-07-15 17:34:45.880788] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.239 17:34:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.497 17:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.497 "name": "Existed_Raid", 00:15:50.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.497 "strip_size_kb": 0, 00:15:50.497 "state": "configuring", 00:15:50.497 "raid_level": "raid1", 00:15:50.497 "superblock": false, 00:15:50.497 "num_base_bdevs": 4, 00:15:50.497 "num_base_bdevs_discovered": 2, 00:15:50.497 "num_base_bdevs_operational": 4, 00:15:50.497 "base_bdevs_list": [ 00:15:50.497 { 00:15:50.497 "name": "BaseBdev1", 00:15:50.497 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:50.497 "is_configured": true, 00:15:50.497 "data_offset": 
0, 00:15:50.497 "data_size": 65536 00:15:50.497 }, 00:15:50.498 { 00:15:50.498 "name": null, 00:15:50.498 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:50.498 "is_configured": false, 00:15:50.498 "data_offset": 0, 00:15:50.498 "data_size": 65536 00:15:50.498 }, 00:15:50.498 { 00:15:50.498 "name": null, 00:15:50.498 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:50.498 "is_configured": false, 00:15:50.498 "data_offset": 0, 00:15:50.498 "data_size": 65536 00:15:50.498 }, 00:15:50.498 { 00:15:50.498 "name": "BaseBdev4", 00:15:50.498 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:50.498 "is_configured": true, 00:15:50.498 "data_offset": 0, 00:15:50.498 "data_size": 65536 00:15:50.498 } 00:15:50.498 ] 00:15:50.498 }' 00:15:50.498 17:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.498 17:34:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:50.756 17:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.756 17:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:51.015 17:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:51.015 17:34:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:51.275 [2024-07-15 17:34:47.016833] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.275 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.535 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.535 "name": "Existed_Raid", 00:15:51.535 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.535 "strip_size_kb": 0, 00:15:51.535 "state": "configuring", 00:15:51.535 "raid_level": "raid1", 00:15:51.535 "superblock": false, 00:15:51.535 "num_base_bdevs": 4, 
00:15:51.535 "num_base_bdevs_discovered": 3, 00:15:51.535 "num_base_bdevs_operational": 4, 00:15:51.535 "base_bdevs_list": [ 00:15:51.535 { 00:15:51.535 "name": "BaseBdev1", 00:15:51.535 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:51.535 "is_configured": true, 00:15:51.535 "data_offset": 0, 00:15:51.535 "data_size": 65536 00:15:51.535 }, 00:15:51.535 { 00:15:51.535 "name": null, 00:15:51.535 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:51.535 "is_configured": false, 00:15:51.535 "data_offset": 0, 00:15:51.535 "data_size": 65536 00:15:51.535 }, 00:15:51.535 { 00:15:51.535 "name": "BaseBdev3", 00:15:51.535 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:51.535 "is_configured": true, 00:15:51.535 "data_offset": 0, 00:15:51.535 "data_size": 65536 00:15:51.535 }, 00:15:51.535 { 00:15:51.535 "name": "BaseBdev4", 00:15:51.535 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:51.535 "is_configured": true, 00:15:51.535 "data_offset": 0, 00:15:51.535 "data_size": 65536 00:15:51.535 } 00:15:51.535 ] 00:15:51.535 }' 00:15:51.535 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.535 17:34:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:51.794 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.794 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:52.360 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:52.360 17:34:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:52.360 [2024-07-15 17:34:48.144873] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.360 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.619 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:15:52.619 "name": "Existed_Raid", 00:15:52.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.619 "strip_size_kb": 0, 00:15:52.619 "state": "configuring", 00:15:52.619 "raid_level": "raid1", 00:15:52.619 "superblock": false, 00:15:52.619 "num_base_bdevs": 4, 00:15:52.619 "num_base_bdevs_discovered": 2, 00:15:52.619 "num_base_bdevs_operational": 4, 00:15:52.619 "base_bdevs_list": [ 00:15:52.619 { 00:15:52.619 "name": null, 00:15:52.619 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:52.619 "is_configured": false, 00:15:52.619 "data_offset": 0, 00:15:52.619 "data_size": 65536 00:15:52.619 }, 00:15:52.619 { 00:15:52.619 "name": null, 00:15:52.619 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:52.619 "is_configured": false, 00:15:52.619 "data_offset": 0, 00:15:52.619 "data_size": 65536 00:15:52.619 }, 00:15:52.619 { 00:15:52.619 "name": "BaseBdev3", 00:15:52.619 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:52.619 "is_configured": true, 00:15:52.619 "data_offset": 0, 00:15:52.619 "data_size": 65536 00:15:52.619 }, 00:15:52.619 { 00:15:52.619 "name": "BaseBdev4", 00:15:52.619 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:52.619 "is_configured": true, 00:15:52.619 "data_offset": 0, 00:15:52.619 "data_size": 65536 00:15:52.619 } 00:15:52.619 ] 00:15:52.619 }' 00:15:52.619 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:52.619 17:34:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:52.876 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:52.876 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:53.133 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:53.133 17:34:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:53.390 [2024-07-15 17:34:49.118653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.390 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.647 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.647 "name": "Existed_Raid", 00:15:53.647 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.647 "strip_size_kb": 0, 00:15:53.647 "state": "configuring", 00:15:53.647 "raid_level": "raid1", 00:15:53.647 "superblock": false, 00:15:53.647 "num_base_bdevs": 4, 00:15:53.647 "num_base_bdevs_discovered": 3, 00:15:53.647 "num_base_bdevs_operational": 4, 00:15:53.647 "base_bdevs_list": [ 00:15:53.647 { 00:15:53.647 "name": null, 00:15:53.647 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:53.647 "is_configured": false, 00:15:53.647 "data_offset": 0, 00:15:53.647 "data_size": 65536 00:15:53.647 }, 00:15:53.647 { 00:15:53.647 "name": "BaseBdev2", 00:15:53.647 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:53.647 "is_configured": true, 00:15:53.647 "data_offset": 0, 00:15:53.647 "data_size": 65536 00:15:53.647 }, 00:15:53.647 { 00:15:53.647 "name": "BaseBdev3", 00:15:53.647 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:53.647 "is_configured": true, 00:15:53.647 "data_offset": 0, 00:15:53.647 "data_size": 65536 00:15:53.647 }, 00:15:53.647 { 00:15:53.647 "name": "BaseBdev4", 00:15:53.647 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:53.647 "is_configured": true, 00:15:53.647 "data_offset": 0, 00:15:53.647 "data_size": 65536 00:15:53.647 } 00:15:53.647 ] 00:15:53.647 }' 00:15:53.647 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.647 17:34:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:54.212 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.212 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:54.212 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:54.212 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.212 17:34:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:54.472 17:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 860da8c7-42d0-11ef-96ac-773515fba644 00:15:54.729 [2024-07-15 17:34:50.442805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:54.729 [2024-07-15 17:34:50.442833] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xca563234f00 00:15:54.729 [2024-07-15 17:34:50.442837] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:54.729 [2024-07-15 17:34:50.442861] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xca563297e20 00:15:54.729 [2024-07-15 17:34:50.442930] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xca563234f00 00:15:54.729 [2024-07-15 17:34:50.442935] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev 
is created with name Existed_Raid, raid_bdev 0xca563234f00 00:15:54.729 [2024-07-15 17:34:50.442969] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:54.729 NewBaseBdev 00:15:54.729 17:34:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:54.729 17:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:54.729 17:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:54.729 17:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:54.729 17:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:54.729 17:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:54.729 17:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:54.987 17:34:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:55.245 [ 00:15:55.245 { 00:15:55.245 "name": "NewBaseBdev", 00:15:55.245 "aliases": [ 00:15:55.245 "860da8c7-42d0-11ef-96ac-773515fba644" 00:15:55.245 ], 00:15:55.245 "product_name": "Malloc disk", 00:15:55.245 "block_size": 512, 00:15:55.245 "num_blocks": 65536, 00:15:55.245 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:55.245 "assigned_rate_limits": { 00:15:55.245 "rw_ios_per_sec": 0, 00:15:55.245 "rw_mbytes_per_sec": 0, 00:15:55.245 "r_mbytes_per_sec": 0, 00:15:55.245 "w_mbytes_per_sec": 0 00:15:55.245 }, 00:15:55.245 "claimed": true, 00:15:55.245 "claim_type": "exclusive_write", 00:15:55.245 "zoned": false, 00:15:55.245 "supported_io_types": { 00:15:55.245 "read": true, 00:15:55.245 "write": true, 00:15:55.245 "unmap": true, 00:15:55.245 "flush": true, 00:15:55.245 "reset": true, 00:15:55.245 "nvme_admin": false, 00:15:55.245 "nvme_io": false, 00:15:55.245 "nvme_io_md": false, 00:15:55.245 "write_zeroes": true, 00:15:55.245 "zcopy": true, 00:15:55.245 "get_zone_info": false, 00:15:55.245 "zone_management": false, 00:15:55.245 "zone_append": false, 00:15:55.245 "compare": false, 00:15:55.245 "compare_and_write": false, 00:15:55.245 "abort": true, 00:15:55.245 "seek_hole": false, 00:15:55.245 "seek_data": false, 00:15:55.245 "copy": true, 00:15:55.245 "nvme_iov_md": false 00:15:55.245 }, 00:15:55.245 "memory_domains": [ 00:15:55.245 { 00:15:55.245 "dma_device_id": "system", 00:15:55.245 "dma_device_type": 1 00:15:55.245 }, 00:15:55.245 { 00:15:55.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.245 "dma_device_type": 2 00:15:55.245 } 00:15:55.245 ], 00:15:55.245 "driver_specific": {} 00:15:55.245 } 00:15:55.245 ] 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:55.245 17:34:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.245 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.246 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.246 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.503 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.503 "name": "Existed_Raid", 00:15:55.503 "uuid": "89ceab95-42d0-11ef-96ac-773515fba644", 00:15:55.503 "strip_size_kb": 0, 00:15:55.503 "state": "online", 00:15:55.503 "raid_level": "raid1", 00:15:55.503 "superblock": false, 00:15:55.503 "num_base_bdevs": 4, 00:15:55.503 "num_base_bdevs_discovered": 4, 00:15:55.503 "num_base_bdevs_operational": 4, 00:15:55.503 "base_bdevs_list": [ 00:15:55.503 { 00:15:55.503 "name": "NewBaseBdev", 00:15:55.503 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:55.503 "is_configured": true, 00:15:55.503 "data_offset": 0, 00:15:55.503 "data_size": 65536 00:15:55.503 }, 00:15:55.503 { 00:15:55.503 "name": "BaseBdev2", 00:15:55.503 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:55.503 "is_configured": true, 00:15:55.503 "data_offset": 0, 00:15:55.503 "data_size": 65536 00:15:55.503 }, 00:15:55.503 { 00:15:55.503 "name": "BaseBdev3", 00:15:55.503 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:55.503 "is_configured": true, 00:15:55.503 "data_offset": 0, 00:15:55.503 "data_size": 65536 00:15:55.503 }, 00:15:55.503 { 00:15:55.503 "name": "BaseBdev4", 00:15:55.503 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:55.503 "is_configured": true, 00:15:55.503 "data_offset": 0, 00:15:55.503 "data_size": 65536 00:15:55.503 } 00:15:55.503 ] 00:15:55.503 }' 00:15:55.503 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.503 17:34:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:56.073 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:56.073 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:56.073 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:56.073 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:56.073 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:56.073 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:56.073 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:56.073 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:56.074 
[2024-07-15 17:34:51.866736] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:56.074 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:56.074 "name": "Existed_Raid", 00:15:56.074 "aliases": [ 00:15:56.074 "89ceab95-42d0-11ef-96ac-773515fba644" 00:15:56.074 ], 00:15:56.074 "product_name": "Raid Volume", 00:15:56.074 "block_size": 512, 00:15:56.074 "num_blocks": 65536, 00:15:56.074 "uuid": "89ceab95-42d0-11ef-96ac-773515fba644", 00:15:56.074 "assigned_rate_limits": { 00:15:56.074 "rw_ios_per_sec": 0, 00:15:56.074 "rw_mbytes_per_sec": 0, 00:15:56.074 "r_mbytes_per_sec": 0, 00:15:56.074 "w_mbytes_per_sec": 0 00:15:56.074 }, 00:15:56.074 "claimed": false, 00:15:56.074 "zoned": false, 00:15:56.074 "supported_io_types": { 00:15:56.074 "read": true, 00:15:56.074 "write": true, 00:15:56.074 "unmap": false, 00:15:56.074 "flush": false, 00:15:56.074 "reset": true, 00:15:56.074 "nvme_admin": false, 00:15:56.074 "nvme_io": false, 00:15:56.074 "nvme_io_md": false, 00:15:56.074 "write_zeroes": true, 00:15:56.074 "zcopy": false, 00:15:56.074 "get_zone_info": false, 00:15:56.074 "zone_management": false, 00:15:56.074 "zone_append": false, 00:15:56.074 "compare": false, 00:15:56.074 "compare_and_write": false, 00:15:56.074 "abort": false, 00:15:56.074 "seek_hole": false, 00:15:56.074 "seek_data": false, 00:15:56.074 "copy": false, 00:15:56.074 "nvme_iov_md": false 00:15:56.074 }, 00:15:56.074 "memory_domains": [ 00:15:56.074 { 00:15:56.074 "dma_device_id": "system", 00:15:56.074 "dma_device_type": 1 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.074 "dma_device_type": 2 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "dma_device_id": "system", 00:15:56.074 "dma_device_type": 1 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.074 "dma_device_type": 2 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "dma_device_id": "system", 00:15:56.074 "dma_device_type": 1 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.074 "dma_device_type": 2 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "dma_device_id": "system", 00:15:56.074 "dma_device_type": 1 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.074 "dma_device_type": 2 00:15:56.074 } 00:15:56.074 ], 00:15:56.074 "driver_specific": { 00:15:56.074 "raid": { 00:15:56.074 "uuid": "89ceab95-42d0-11ef-96ac-773515fba644", 00:15:56.074 "strip_size_kb": 0, 00:15:56.074 "state": "online", 00:15:56.074 "raid_level": "raid1", 00:15:56.074 "superblock": false, 00:15:56.074 "num_base_bdevs": 4, 00:15:56.074 "num_base_bdevs_discovered": 4, 00:15:56.074 "num_base_bdevs_operational": 4, 00:15:56.074 "base_bdevs_list": [ 00:15:56.074 { 00:15:56.074 "name": "NewBaseBdev", 00:15:56.074 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:56.074 "is_configured": true, 00:15:56.074 "data_offset": 0, 00:15:56.074 "data_size": 65536 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "name": "BaseBdev2", 00:15:56.074 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:56.074 "is_configured": true, 00:15:56.074 "data_offset": 0, 00:15:56.074 "data_size": 65536 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "name": "BaseBdev3", 00:15:56.074 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:56.074 "is_configured": true, 00:15:56.074 "data_offset": 0, 00:15:56.074 "data_size": 65536 00:15:56.074 }, 00:15:56.074 { 00:15:56.074 "name": "BaseBdev4", 
00:15:56.074 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:56.074 "is_configured": true, 00:15:56.074 "data_offset": 0, 00:15:56.074 "data_size": 65536 00:15:56.074 } 00:15:56.074 ] 00:15:56.074 } 00:15:56.074 } 00:15:56.074 }' 00:15:56.074 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:56.074 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:56.074 BaseBdev2 00:15:56.074 BaseBdev3 00:15:56.074 BaseBdev4' 00:15:56.074 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.074 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.074 17:34:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.333 "name": "NewBaseBdev", 00:15:56.333 "aliases": [ 00:15:56.333 "860da8c7-42d0-11ef-96ac-773515fba644" 00:15:56.333 ], 00:15:56.333 "product_name": "Malloc disk", 00:15:56.333 "block_size": 512, 00:15:56.333 "num_blocks": 65536, 00:15:56.333 "uuid": "860da8c7-42d0-11ef-96ac-773515fba644", 00:15:56.333 "assigned_rate_limits": { 00:15:56.333 "rw_ios_per_sec": 0, 00:15:56.333 "rw_mbytes_per_sec": 0, 00:15:56.333 "r_mbytes_per_sec": 0, 00:15:56.333 "w_mbytes_per_sec": 0 00:15:56.333 }, 00:15:56.333 "claimed": true, 00:15:56.333 "claim_type": "exclusive_write", 00:15:56.333 "zoned": false, 00:15:56.333 "supported_io_types": { 00:15:56.333 "read": true, 00:15:56.333 "write": true, 00:15:56.333 "unmap": true, 00:15:56.333 "flush": true, 00:15:56.333 "reset": true, 00:15:56.333 "nvme_admin": false, 00:15:56.333 "nvme_io": false, 00:15:56.333 "nvme_io_md": false, 00:15:56.333 "write_zeroes": true, 00:15:56.333 "zcopy": true, 00:15:56.333 "get_zone_info": false, 00:15:56.333 "zone_management": false, 00:15:56.333 "zone_append": false, 00:15:56.333 "compare": false, 00:15:56.333 "compare_and_write": false, 00:15:56.333 "abort": true, 00:15:56.333 "seek_hole": false, 00:15:56.333 "seek_data": false, 00:15:56.333 "copy": true, 00:15:56.333 "nvme_iov_md": false 00:15:56.333 }, 00:15:56.333 "memory_domains": [ 00:15:56.333 { 00:15:56.333 "dma_device_id": "system", 00:15:56.333 "dma_device_type": 1 00:15:56.333 }, 00:15:56.333 { 00:15:56.333 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.333 "dma_device_type": 2 00:15:56.333 } 00:15:56.333 ], 00:15:56.333 "driver_specific": {} 00:15:56.333 }' 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.333 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.592 
17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:56.592 "name": "BaseBdev2", 00:15:56.592 "aliases": [ 00:15:56.592 "8361720a-42d0-11ef-96ac-773515fba644" 00:15:56.592 ], 00:15:56.592 "product_name": "Malloc disk", 00:15:56.592 "block_size": 512, 00:15:56.592 "num_blocks": 65536, 00:15:56.592 "uuid": "8361720a-42d0-11ef-96ac-773515fba644", 00:15:56.592 "assigned_rate_limits": { 00:15:56.592 "rw_ios_per_sec": 0, 00:15:56.592 "rw_mbytes_per_sec": 0, 00:15:56.592 "r_mbytes_per_sec": 0, 00:15:56.592 "w_mbytes_per_sec": 0 00:15:56.592 }, 00:15:56.592 "claimed": true, 00:15:56.592 "claim_type": "exclusive_write", 00:15:56.592 "zoned": false, 00:15:56.592 "supported_io_types": { 00:15:56.592 "read": true, 00:15:56.592 "write": true, 00:15:56.592 "unmap": true, 00:15:56.592 "flush": true, 00:15:56.592 "reset": true, 00:15:56.592 "nvme_admin": false, 00:15:56.592 "nvme_io": false, 00:15:56.592 "nvme_io_md": false, 00:15:56.592 "write_zeroes": true, 00:15:56.592 "zcopy": true, 00:15:56.592 "get_zone_info": false, 00:15:56.592 "zone_management": false, 00:15:56.592 "zone_append": false, 00:15:56.592 "compare": false, 00:15:56.592 "compare_and_write": false, 00:15:56.592 "abort": true, 00:15:56.592 "seek_hole": false, 00:15:56.592 "seek_data": false, 00:15:56.592 "copy": true, 00:15:56.592 "nvme_iov_md": false 00:15:56.592 }, 00:15:56.592 "memory_domains": [ 00:15:56.592 { 00:15:56.592 "dma_device_id": "system", 00:15:56.592 "dma_device_type": 1 00:15:56.592 }, 00:15:56.592 { 00:15:56.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.592 "dma_device_type": 2 00:15:56.592 } 00:15:56.592 ], 00:15:56.592 "driver_specific": {} 00:15:56.592 }' 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:56.592 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:56.849 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:57.107 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.107 "name": "BaseBdev3", 00:15:57.107 "aliases": [ 00:15:57.107 "83e19ee8-42d0-11ef-96ac-773515fba644" 00:15:57.107 ], 00:15:57.107 "product_name": "Malloc disk", 00:15:57.107 "block_size": 512, 00:15:57.107 "num_blocks": 65536, 00:15:57.107 "uuid": "83e19ee8-42d0-11ef-96ac-773515fba644", 00:15:57.107 "assigned_rate_limits": { 00:15:57.107 "rw_ios_per_sec": 0, 00:15:57.107 "rw_mbytes_per_sec": 0, 00:15:57.107 "r_mbytes_per_sec": 0, 00:15:57.107 "w_mbytes_per_sec": 0 00:15:57.107 }, 00:15:57.107 "claimed": true, 00:15:57.107 "claim_type": "exclusive_write", 00:15:57.107 "zoned": false, 00:15:57.107 "supported_io_types": { 00:15:57.107 "read": true, 00:15:57.107 "write": true, 00:15:57.107 "unmap": true, 00:15:57.107 "flush": true, 00:15:57.107 "reset": true, 00:15:57.107 "nvme_admin": false, 00:15:57.107 "nvme_io": false, 00:15:57.107 "nvme_io_md": false, 00:15:57.107 "write_zeroes": true, 00:15:57.107 "zcopy": true, 00:15:57.107 "get_zone_info": false, 00:15:57.107 "zone_management": false, 00:15:57.107 "zone_append": false, 00:15:57.107 "compare": false, 00:15:57.107 "compare_and_write": false, 00:15:57.107 "abort": true, 00:15:57.107 "seek_hole": false, 00:15:57.107 "seek_data": false, 00:15:57.107 "copy": true, 00:15:57.107 "nvme_iov_md": false 00:15:57.107 }, 00:15:57.107 "memory_domains": [ 00:15:57.107 { 00:15:57.107 "dma_device_id": "system", 00:15:57.107 "dma_device_type": 1 00:15:57.107 }, 00:15:57.107 { 00:15:57.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.107 "dma_device_type": 2 00:15:57.107 } 00:15:57.107 ], 00:15:57.107 "driver_specific": {} 00:15:57.107 }' 00:15:57.107 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.107 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.107 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.107 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.107 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:57.108 17:34:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.366 "name": "BaseBdev4", 00:15:57.366 "aliases": [ 00:15:57.366 "8454fa85-42d0-11ef-96ac-773515fba644" 00:15:57.366 ], 00:15:57.366 "product_name": "Malloc disk", 00:15:57.366 "block_size": 512, 00:15:57.366 "num_blocks": 65536, 00:15:57.366 "uuid": "8454fa85-42d0-11ef-96ac-773515fba644", 00:15:57.366 "assigned_rate_limits": { 00:15:57.366 "rw_ios_per_sec": 0, 00:15:57.366 "rw_mbytes_per_sec": 0, 00:15:57.366 "r_mbytes_per_sec": 0, 00:15:57.366 "w_mbytes_per_sec": 0 00:15:57.366 }, 00:15:57.366 "claimed": true, 00:15:57.366 "claim_type": "exclusive_write", 00:15:57.366 "zoned": false, 00:15:57.366 "supported_io_types": { 00:15:57.366 "read": true, 00:15:57.366 "write": true, 00:15:57.366 "unmap": true, 00:15:57.366 "flush": true, 00:15:57.366 "reset": true, 00:15:57.366 "nvme_admin": false, 00:15:57.366 "nvme_io": false, 00:15:57.366 "nvme_io_md": false, 00:15:57.366 "write_zeroes": true, 00:15:57.366 "zcopy": true, 00:15:57.366 "get_zone_info": false, 00:15:57.366 "zone_management": false, 00:15:57.366 "zone_append": false, 00:15:57.366 "compare": false, 00:15:57.366 "compare_and_write": false, 00:15:57.366 "abort": true, 00:15:57.366 "seek_hole": false, 00:15:57.366 "seek_data": false, 00:15:57.366 "copy": true, 00:15:57.366 "nvme_iov_md": false 00:15:57.366 }, 00:15:57.366 "memory_domains": [ 00:15:57.366 { 00:15:57.366 "dma_device_id": "system", 00:15:57.366 "dma_device_type": 1 00:15:57.366 }, 00:15:57.366 { 00:15:57.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.366 "dma_device_type": 2 00:15:57.366 } 00:15:57.366 ], 00:15:57.366 "driver_specific": {} 00:15:57.366 }' 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.366 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 
00:15:57.644 [2024-07-15 17:34:53.338738] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:57.644 [2024-07-15 17:34:53.338766] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:57.644 [2024-07-15 17:34:53.338787] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:57.644 [2024-07-15 17:34:53.338852] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:57.644 [2024-07-15 17:34:53.338857] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xca563234f00 name Existed_Raid, state offline 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 62991 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 62991 ']' 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 62991 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 62991 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:57.644 killing process with pid 62991 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62991' 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 62991 00:15:57.644 [2024-07-15 17:34:53.365600] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:57.644 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 62991 00:15:57.644 [2024-07-15 17:34:53.388480] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:57.901 00:15:57.901 real 0m27.268s 00:15:57.901 user 0m49.887s 00:15:57.901 sys 0m3.767s 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:57.901 ************************************ 00:15:57.901 END TEST raid_state_function_test 00:15:57.901 ************************************ 00:15:57.901 17:34:53 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:57.901 17:34:53 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:57.901 17:34:53 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:57.901 17:34:53 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:57.901 17:34:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:57.901 ************************************ 00:15:57.901 START TEST raid_state_function_test_sb 00:15:57.901 ************************************ 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # 
raid_state_function_test raid1 4 true 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=63810 00:15:57.901 Process raid pid: 63810 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63810' 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 63810 /var/tmp/spdk-raid.sock 
00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 63810 ']' 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:57.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:57.901 17:34:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.901 [2024-07-15 17:34:53.626182] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:15:57.901 [2024-07-15 17:34:53.626454] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:58.466 EAL: TSC is not safe to use in SMP mode 00:15:58.466 EAL: TSC is not invariant 00:15:58.466 [2024-07-15 17:34:54.171920] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:58.466 [2024-07-15 17:34:54.253224] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:58.466 [2024-07-15 17:34:54.255316] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.466 [2024-07-15 17:34:54.256078] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:58.466 [2024-07-15 17:34:54.256093] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:59.031 [2024-07-15 17:34:54.827383] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:59.031 [2024-07-15 17:34:54.827434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:59.031 [2024-07-15 17:34:54.827440] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:59.031 [2024-07-15 17:34:54.827449] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:59.031 [2024-07-15 17:34:54.827453] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:59.031 [2024-07-15 17:34:54.827460] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:59.031 [2024-07-15 17:34:54.827464] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:59.031 [2024-07-15 17:34:54.827471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist 
now 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.031 17:34:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.595 17:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.596 "name": "Existed_Raid", 00:15:59.596 "uuid": "8c6bb26a-42d0-11ef-96ac-773515fba644", 00:15:59.596 "strip_size_kb": 0, 00:15:59.596 "state": "configuring", 00:15:59.596 "raid_level": "raid1", 00:15:59.596 "superblock": true, 00:15:59.596 "num_base_bdevs": 4, 00:15:59.596 "num_base_bdevs_discovered": 0, 00:15:59.596 "num_base_bdevs_operational": 4, 00:15:59.596 "base_bdevs_list": [ 00:15:59.596 { 00:15:59.596 "name": "BaseBdev1", 00:15:59.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.596 "is_configured": false, 00:15:59.596 "data_offset": 0, 00:15:59.596 "data_size": 0 00:15:59.596 }, 00:15:59.596 { 00:15:59.596 "name": "BaseBdev2", 00:15:59.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.596 "is_configured": false, 00:15:59.596 "data_offset": 0, 00:15:59.596 "data_size": 0 00:15:59.596 }, 00:15:59.596 { 00:15:59.596 "name": "BaseBdev3", 00:15:59.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.596 "is_configured": false, 00:15:59.596 "data_offset": 0, 00:15:59.596 "data_size": 0 00:15:59.596 }, 00:15:59.596 { 00:15:59.596 "name": "BaseBdev4", 00:15:59.596 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.596 "is_configured": false, 00:15:59.596 "data_offset": 0, 00:15:59.596 "data_size": 0 00:15:59.596 } 00:15:59.596 ] 00:15:59.596 }' 00:15:59.596 17:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.596 17:34:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.854 17:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:59.854 [2024-07-15 17:34:55.647363] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:59.854 [2024-07-15 17:34:55.647391] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x10e52f634500 name Existed_Raid, state configuring 00:15:59.854 17:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:00.112 [2024-07-15 17:34:55.871379] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:00.112 [2024-07-15 17:34:55.871439] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:00.112 [2024-07-15 17:34:55.871445] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:00.112 [2024-07-15 17:34:55.871454] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:00.112 [2024-07-15 17:34:55.871458] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:00.112 [2024-07-15 17:34:55.871465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:00.112 [2024-07-15 17:34:55.871468] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:00.112 [2024-07-15 17:34:55.871475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:00.112 17:34:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:00.370 [2024-07-15 17:34:56.104381] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:00.370 BaseBdev1 00:16:00.370 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:00.371 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:00.371 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:00.371 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:00.371 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:00.371 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:00.371 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:00.627 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:00.885 [ 00:16:00.885 { 00:16:00.885 "name": "BaseBdev1", 00:16:00.885 "aliases": [ 00:16:00.885 "8d2e6671-42d0-11ef-96ac-773515fba644" 00:16:00.885 ], 00:16:00.885 "product_name": "Malloc disk", 00:16:00.885 "block_size": 512, 00:16:00.885 "num_blocks": 65536, 00:16:00.885 "uuid": "8d2e6671-42d0-11ef-96ac-773515fba644", 00:16:00.885 "assigned_rate_limits": { 00:16:00.885 "rw_ios_per_sec": 0, 00:16:00.885 "rw_mbytes_per_sec": 0, 00:16:00.885 "r_mbytes_per_sec": 0, 00:16:00.885 "w_mbytes_per_sec": 0 00:16:00.885 }, 00:16:00.885 "claimed": true, 00:16:00.885 "claim_type": "exclusive_write", 00:16:00.885 "zoned": false, 00:16:00.885 "supported_io_types": { 00:16:00.885 "read": true, 00:16:00.885 "write": true, 00:16:00.885 "unmap": true, 
00:16:00.885 "flush": true, 00:16:00.885 "reset": true, 00:16:00.885 "nvme_admin": false, 00:16:00.885 "nvme_io": false, 00:16:00.885 "nvme_io_md": false, 00:16:00.885 "write_zeroes": true, 00:16:00.885 "zcopy": true, 00:16:00.885 "get_zone_info": false, 00:16:00.885 "zone_management": false, 00:16:00.885 "zone_append": false, 00:16:00.885 "compare": false, 00:16:00.885 "compare_and_write": false, 00:16:00.885 "abort": true, 00:16:00.885 "seek_hole": false, 00:16:00.885 "seek_data": false, 00:16:00.885 "copy": true, 00:16:00.885 "nvme_iov_md": false 00:16:00.885 }, 00:16:00.885 "memory_domains": [ 00:16:00.885 { 00:16:00.885 "dma_device_id": "system", 00:16:00.885 "dma_device_type": 1 00:16:00.885 }, 00:16:00.885 { 00:16:00.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:00.885 "dma_device_type": 2 00:16:00.885 } 00:16:00.885 ], 00:16:00.885 "driver_specific": {} 00:16:00.885 } 00:16:00.885 ] 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.885 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:01.142 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:01.142 "name": "Existed_Raid", 00:16:01.142 "uuid": "8d0aff8e-42d0-11ef-96ac-773515fba644", 00:16:01.142 "strip_size_kb": 0, 00:16:01.142 "state": "configuring", 00:16:01.142 "raid_level": "raid1", 00:16:01.142 "superblock": true, 00:16:01.142 "num_base_bdevs": 4, 00:16:01.142 "num_base_bdevs_discovered": 1, 00:16:01.142 "num_base_bdevs_operational": 4, 00:16:01.142 "base_bdevs_list": [ 00:16:01.142 { 00:16:01.142 "name": "BaseBdev1", 00:16:01.142 "uuid": "8d2e6671-42d0-11ef-96ac-773515fba644", 00:16:01.142 "is_configured": true, 00:16:01.142 "data_offset": 2048, 00:16:01.142 "data_size": 63488 00:16:01.142 }, 00:16:01.142 { 00:16:01.142 "name": "BaseBdev2", 00:16:01.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.142 "is_configured": false, 00:16:01.142 "data_offset": 0, 00:16:01.142 "data_size": 0 00:16:01.142 }, 00:16:01.142 { 00:16:01.142 "name": "BaseBdev3", 00:16:01.142 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:01.142 "is_configured": false, 00:16:01.142 "data_offset": 0, 00:16:01.142 "data_size": 0 00:16:01.142 }, 00:16:01.142 { 00:16:01.142 "name": "BaseBdev4", 00:16:01.142 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.142 "is_configured": false, 00:16:01.142 "data_offset": 0, 00:16:01.142 "data_size": 0 00:16:01.142 } 00:16:01.142 ] 00:16:01.142 }' 00:16:01.142 17:34:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:01.142 17:34:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.489 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:01.747 [2024-07-15 17:34:57.443395] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:01.747 [2024-07-15 17:34:57.443430] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10e52f634500 name Existed_Raid, state configuring 00:16:01.747 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:02.004 [2024-07-15 17:34:57.731417] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:02.004 [2024-07-15 17:34:57.732211] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:02.004 [2024-07-15 17:34:57.732251] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:02.004 [2024-07-15 17:34:57.732257] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:02.004 [2024-07-15 17:34:57.732265] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:02.004 [2024-07-15 17:34:57.732269] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:02.004 [2024-07-15 17:34:57.732276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:02.004 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:02.005 17:34:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:02.262 17:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:02.262 "name": "Existed_Raid", 00:16:02.262 "uuid": "8e26d123-42d0-11ef-96ac-773515fba644", 00:16:02.262 "strip_size_kb": 0, 00:16:02.262 "state": "configuring", 00:16:02.262 "raid_level": "raid1", 00:16:02.262 "superblock": true, 00:16:02.262 "num_base_bdevs": 4, 00:16:02.262 "num_base_bdevs_discovered": 1, 00:16:02.262 "num_base_bdevs_operational": 4, 00:16:02.262 "base_bdevs_list": [ 00:16:02.262 { 00:16:02.262 "name": "BaseBdev1", 00:16:02.262 "uuid": "8d2e6671-42d0-11ef-96ac-773515fba644", 00:16:02.262 "is_configured": true, 00:16:02.262 "data_offset": 2048, 00:16:02.262 "data_size": 63488 00:16:02.262 }, 00:16:02.262 { 00:16:02.262 "name": "BaseBdev2", 00:16:02.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.262 "is_configured": false, 00:16:02.262 "data_offset": 0, 00:16:02.262 "data_size": 0 00:16:02.262 }, 00:16:02.262 { 00:16:02.262 "name": "BaseBdev3", 00:16:02.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.262 "is_configured": false, 00:16:02.262 "data_offset": 0, 00:16:02.262 "data_size": 0 00:16:02.262 }, 00:16:02.262 { 00:16:02.262 "name": "BaseBdev4", 00:16:02.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.262 "is_configured": false, 00:16:02.262 "data_offset": 0, 00:16:02.262 "data_size": 0 00:16:02.262 } 00:16:02.262 ] 00:16:02.262 }' 00:16:02.262 17:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:02.262 17:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.826 17:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:02.826 [2024-07-15 17:34:58.619563] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:02.826 BaseBdev2 00:16:02.826 17:34:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:02.826 17:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:02.826 17:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:02.826 17:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:02.826 17:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:02.826 17:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:02.826 17:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:03.082 17:34:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:03.338 [ 00:16:03.338 { 00:16:03.338 "name": "BaseBdev2", 
00:16:03.338 "aliases": [ 00:16:03.338 "8eae515d-42d0-11ef-96ac-773515fba644" 00:16:03.338 ], 00:16:03.338 "product_name": "Malloc disk", 00:16:03.338 "block_size": 512, 00:16:03.338 "num_blocks": 65536, 00:16:03.338 "uuid": "8eae515d-42d0-11ef-96ac-773515fba644", 00:16:03.338 "assigned_rate_limits": { 00:16:03.338 "rw_ios_per_sec": 0, 00:16:03.338 "rw_mbytes_per_sec": 0, 00:16:03.338 "r_mbytes_per_sec": 0, 00:16:03.338 "w_mbytes_per_sec": 0 00:16:03.338 }, 00:16:03.338 "claimed": true, 00:16:03.338 "claim_type": "exclusive_write", 00:16:03.338 "zoned": false, 00:16:03.338 "supported_io_types": { 00:16:03.338 "read": true, 00:16:03.338 "write": true, 00:16:03.338 "unmap": true, 00:16:03.338 "flush": true, 00:16:03.338 "reset": true, 00:16:03.338 "nvme_admin": false, 00:16:03.338 "nvme_io": false, 00:16:03.338 "nvme_io_md": false, 00:16:03.338 "write_zeroes": true, 00:16:03.338 "zcopy": true, 00:16:03.338 "get_zone_info": false, 00:16:03.338 "zone_management": false, 00:16:03.338 "zone_append": false, 00:16:03.338 "compare": false, 00:16:03.338 "compare_and_write": false, 00:16:03.338 "abort": true, 00:16:03.338 "seek_hole": false, 00:16:03.338 "seek_data": false, 00:16:03.338 "copy": true, 00:16:03.338 "nvme_iov_md": false 00:16:03.338 }, 00:16:03.338 "memory_domains": [ 00:16:03.338 { 00:16:03.338 "dma_device_id": "system", 00:16:03.338 "dma_device_type": 1 00:16:03.338 }, 00:16:03.338 { 00:16:03.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.338 "dma_device_type": 2 00:16:03.338 } 00:16:03.338 ], 00:16:03.338 "driver_specific": {} 00:16:03.338 } 00:16:03.338 ] 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.338 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.595 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.595 "name": 
"Existed_Raid", 00:16:03.595 "uuid": "8e26d123-42d0-11ef-96ac-773515fba644", 00:16:03.595 "strip_size_kb": 0, 00:16:03.595 "state": "configuring", 00:16:03.595 "raid_level": "raid1", 00:16:03.595 "superblock": true, 00:16:03.595 "num_base_bdevs": 4, 00:16:03.595 "num_base_bdevs_discovered": 2, 00:16:03.595 "num_base_bdevs_operational": 4, 00:16:03.595 "base_bdevs_list": [ 00:16:03.595 { 00:16:03.595 "name": "BaseBdev1", 00:16:03.595 "uuid": "8d2e6671-42d0-11ef-96ac-773515fba644", 00:16:03.595 "is_configured": true, 00:16:03.595 "data_offset": 2048, 00:16:03.595 "data_size": 63488 00:16:03.595 }, 00:16:03.595 { 00:16:03.595 "name": "BaseBdev2", 00:16:03.595 "uuid": "8eae515d-42d0-11ef-96ac-773515fba644", 00:16:03.595 "is_configured": true, 00:16:03.595 "data_offset": 2048, 00:16:03.595 "data_size": 63488 00:16:03.595 }, 00:16:03.595 { 00:16:03.595 "name": "BaseBdev3", 00:16:03.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.595 "is_configured": false, 00:16:03.595 "data_offset": 0, 00:16:03.595 "data_size": 0 00:16:03.595 }, 00:16:03.595 { 00:16:03.595 "name": "BaseBdev4", 00:16:03.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.595 "is_configured": false, 00:16:03.595 "data_offset": 0, 00:16:03.595 "data_size": 0 00:16:03.595 } 00:16:03.595 ] 00:16:03.595 }' 00:16:03.595 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.595 17:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.853 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:04.111 [2024-07-15 17:34:59.839577] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:04.111 BaseBdev3 00:16:04.111 17:34:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:16:04.111 17:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:04.111 17:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:04.111 17:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:04.111 17:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:04.111 17:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:04.111 17:34:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:04.369 17:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:04.627 [ 00:16:04.627 { 00:16:04.627 "name": "BaseBdev3", 00:16:04.627 "aliases": [ 00:16:04.627 "8f687abd-42d0-11ef-96ac-773515fba644" 00:16:04.627 ], 00:16:04.627 "product_name": "Malloc disk", 00:16:04.627 "block_size": 512, 00:16:04.627 "num_blocks": 65536, 00:16:04.627 "uuid": "8f687abd-42d0-11ef-96ac-773515fba644", 00:16:04.627 "assigned_rate_limits": { 00:16:04.627 "rw_ios_per_sec": 0, 00:16:04.627 "rw_mbytes_per_sec": 0, 00:16:04.627 "r_mbytes_per_sec": 0, 00:16:04.627 "w_mbytes_per_sec": 0 00:16:04.627 }, 00:16:04.627 "claimed": true, 00:16:04.627 "claim_type": "exclusive_write", 
00:16:04.627 "zoned": false, 00:16:04.627 "supported_io_types": { 00:16:04.627 "read": true, 00:16:04.627 "write": true, 00:16:04.627 "unmap": true, 00:16:04.627 "flush": true, 00:16:04.627 "reset": true, 00:16:04.627 "nvme_admin": false, 00:16:04.627 "nvme_io": false, 00:16:04.627 "nvme_io_md": false, 00:16:04.627 "write_zeroes": true, 00:16:04.627 "zcopy": true, 00:16:04.627 "get_zone_info": false, 00:16:04.627 "zone_management": false, 00:16:04.627 "zone_append": false, 00:16:04.627 "compare": false, 00:16:04.627 "compare_and_write": false, 00:16:04.627 "abort": true, 00:16:04.627 "seek_hole": false, 00:16:04.627 "seek_data": false, 00:16:04.627 "copy": true, 00:16:04.627 "nvme_iov_md": false 00:16:04.627 }, 00:16:04.627 "memory_domains": [ 00:16:04.627 { 00:16:04.627 "dma_device_id": "system", 00:16:04.627 "dma_device_type": 1 00:16:04.627 }, 00:16:04.627 { 00:16:04.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:04.627 "dma_device_type": 2 00:16:04.627 } 00:16:04.627 ], 00:16:04.627 "driver_specific": {} 00:16:04.627 } 00:16:04.627 ] 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.627 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:04.884 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.884 "name": "Existed_Raid", 00:16:04.884 "uuid": "8e26d123-42d0-11ef-96ac-773515fba644", 00:16:04.884 "strip_size_kb": 0, 00:16:04.884 "state": "configuring", 00:16:04.884 "raid_level": "raid1", 00:16:04.884 "superblock": true, 00:16:04.884 "num_base_bdevs": 4, 00:16:04.884 "num_base_bdevs_discovered": 3, 00:16:04.884 "num_base_bdevs_operational": 4, 00:16:04.884 "base_bdevs_list": [ 00:16:04.884 { 00:16:04.884 "name": "BaseBdev1", 00:16:04.885 "uuid": "8d2e6671-42d0-11ef-96ac-773515fba644", 00:16:04.885 "is_configured": true, 00:16:04.885 
"data_offset": 2048, 00:16:04.885 "data_size": 63488 00:16:04.885 }, 00:16:04.885 { 00:16:04.885 "name": "BaseBdev2", 00:16:04.885 "uuid": "8eae515d-42d0-11ef-96ac-773515fba644", 00:16:04.885 "is_configured": true, 00:16:04.885 "data_offset": 2048, 00:16:04.885 "data_size": 63488 00:16:04.885 }, 00:16:04.885 { 00:16:04.885 "name": "BaseBdev3", 00:16:04.885 "uuid": "8f687abd-42d0-11ef-96ac-773515fba644", 00:16:04.885 "is_configured": true, 00:16:04.885 "data_offset": 2048, 00:16:04.885 "data_size": 63488 00:16:04.885 }, 00:16:04.885 { 00:16:04.885 "name": "BaseBdev4", 00:16:04.885 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.885 "is_configured": false, 00:16:04.885 "data_offset": 0, 00:16:04.885 "data_size": 0 00:16:04.885 } 00:16:04.885 ] 00:16:04.885 }' 00:16:04.885 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.885 17:35:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.449 17:35:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:05.449 [2024-07-15 17:35:01.203598] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:05.449 [2024-07-15 17:35:01.203677] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10e52f634a00 00:16:05.449 [2024-07-15 17:35:01.203683] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:05.449 [2024-07-15 17:35:01.203706] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e52f697e20 00:16:05.449 [2024-07-15 17:35:01.203767] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10e52f634a00 00:16:05.449 [2024-07-15 17:35:01.203772] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10e52f634a00 00:16:05.449 [2024-07-15 17:35:01.203793] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.449 BaseBdev4 00:16:05.449 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:16:05.449 17:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:05.449 17:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:05.449 17:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:05.449 17:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:05.449 17:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:05.449 17:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.706 17:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:05.965 [ 00:16:05.965 { 00:16:05.965 "name": "BaseBdev4", 00:16:05.965 "aliases": [ 00:16:05.965 "90389cd2-42d0-11ef-96ac-773515fba644" 00:16:05.965 ], 00:16:05.965 "product_name": "Malloc disk", 00:16:05.965 "block_size": 512, 00:16:05.965 "num_blocks": 65536, 00:16:05.965 "uuid": "90389cd2-42d0-11ef-96ac-773515fba644", 00:16:05.965 
"assigned_rate_limits": { 00:16:05.965 "rw_ios_per_sec": 0, 00:16:05.965 "rw_mbytes_per_sec": 0, 00:16:05.965 "r_mbytes_per_sec": 0, 00:16:05.965 "w_mbytes_per_sec": 0 00:16:05.965 }, 00:16:05.965 "claimed": true, 00:16:05.965 "claim_type": "exclusive_write", 00:16:05.965 "zoned": false, 00:16:05.965 "supported_io_types": { 00:16:05.965 "read": true, 00:16:05.965 "write": true, 00:16:05.965 "unmap": true, 00:16:05.965 "flush": true, 00:16:05.965 "reset": true, 00:16:05.965 "nvme_admin": false, 00:16:05.965 "nvme_io": false, 00:16:05.965 "nvme_io_md": false, 00:16:05.965 "write_zeroes": true, 00:16:05.965 "zcopy": true, 00:16:05.965 "get_zone_info": false, 00:16:05.965 "zone_management": false, 00:16:05.965 "zone_append": false, 00:16:05.965 "compare": false, 00:16:05.965 "compare_and_write": false, 00:16:05.965 "abort": true, 00:16:05.965 "seek_hole": false, 00:16:05.965 "seek_data": false, 00:16:05.965 "copy": true, 00:16:05.965 "nvme_iov_md": false 00:16:05.965 }, 00:16:05.965 "memory_domains": [ 00:16:05.965 { 00:16:05.965 "dma_device_id": "system", 00:16:05.965 "dma_device_type": 1 00:16:05.965 }, 00:16:05.965 { 00:16:05.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.965 "dma_device_type": 2 00:16:05.965 } 00:16:05.965 ], 00:16:05.965 "driver_specific": {} 00:16:05.965 } 00:16:05.965 ] 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.965 17:35:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.545 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.545 "name": "Existed_Raid", 00:16:06.545 "uuid": "8e26d123-42d0-11ef-96ac-773515fba644", 00:16:06.545 "strip_size_kb": 0, 00:16:06.545 "state": "online", 00:16:06.545 "raid_level": "raid1", 00:16:06.545 "superblock": true, 00:16:06.545 "num_base_bdevs": 4, 00:16:06.545 "num_base_bdevs_discovered": 
4, 00:16:06.545 "num_base_bdevs_operational": 4, 00:16:06.545 "base_bdevs_list": [ 00:16:06.545 { 00:16:06.545 "name": "BaseBdev1", 00:16:06.545 "uuid": "8d2e6671-42d0-11ef-96ac-773515fba644", 00:16:06.545 "is_configured": true, 00:16:06.545 "data_offset": 2048, 00:16:06.545 "data_size": 63488 00:16:06.545 }, 00:16:06.545 { 00:16:06.545 "name": "BaseBdev2", 00:16:06.545 "uuid": "8eae515d-42d0-11ef-96ac-773515fba644", 00:16:06.545 "is_configured": true, 00:16:06.545 "data_offset": 2048, 00:16:06.545 "data_size": 63488 00:16:06.545 }, 00:16:06.545 { 00:16:06.545 "name": "BaseBdev3", 00:16:06.545 "uuid": "8f687abd-42d0-11ef-96ac-773515fba644", 00:16:06.545 "is_configured": true, 00:16:06.545 "data_offset": 2048, 00:16:06.545 "data_size": 63488 00:16:06.545 }, 00:16:06.545 { 00:16:06.545 "name": "BaseBdev4", 00:16:06.545 "uuid": "90389cd2-42d0-11ef-96ac-773515fba644", 00:16:06.545 "is_configured": true, 00:16:06.545 "data_offset": 2048, 00:16:06.545 "data_size": 63488 00:16:06.545 } 00:16:06.545 ] 00:16:06.545 }' 00:16:06.545 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.545 17:35:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.802 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:06.802 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:06.802 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:06.802 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:06.802 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:06.802 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:06.802 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:06.802 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:07.060 [2024-07-15 17:35:02.643536] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.060 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:07.060 "name": "Existed_Raid", 00:16:07.060 "aliases": [ 00:16:07.060 "8e26d123-42d0-11ef-96ac-773515fba644" 00:16:07.060 ], 00:16:07.060 "product_name": "Raid Volume", 00:16:07.060 "block_size": 512, 00:16:07.060 "num_blocks": 63488, 00:16:07.060 "uuid": "8e26d123-42d0-11ef-96ac-773515fba644", 00:16:07.060 "assigned_rate_limits": { 00:16:07.060 "rw_ios_per_sec": 0, 00:16:07.060 "rw_mbytes_per_sec": 0, 00:16:07.060 "r_mbytes_per_sec": 0, 00:16:07.060 "w_mbytes_per_sec": 0 00:16:07.060 }, 00:16:07.060 "claimed": false, 00:16:07.060 "zoned": false, 00:16:07.060 "supported_io_types": { 00:16:07.060 "read": true, 00:16:07.060 "write": true, 00:16:07.060 "unmap": false, 00:16:07.060 "flush": false, 00:16:07.060 "reset": true, 00:16:07.060 "nvme_admin": false, 00:16:07.060 "nvme_io": false, 00:16:07.060 "nvme_io_md": false, 00:16:07.060 "write_zeroes": true, 00:16:07.060 "zcopy": false, 00:16:07.060 "get_zone_info": false, 00:16:07.060 "zone_management": false, 00:16:07.060 "zone_append": false, 00:16:07.060 "compare": false, 00:16:07.060 "compare_and_write": false, 00:16:07.060 "abort": 
false, 00:16:07.060 "seek_hole": false, 00:16:07.060 "seek_data": false, 00:16:07.060 "copy": false, 00:16:07.060 "nvme_iov_md": false 00:16:07.060 }, 00:16:07.060 "memory_domains": [ 00:16:07.060 { 00:16:07.060 "dma_device_id": "system", 00:16:07.060 "dma_device_type": 1 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.060 "dma_device_type": 2 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "dma_device_id": "system", 00:16:07.060 "dma_device_type": 1 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.060 "dma_device_type": 2 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "dma_device_id": "system", 00:16:07.060 "dma_device_type": 1 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.060 "dma_device_type": 2 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "dma_device_id": "system", 00:16:07.060 "dma_device_type": 1 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.060 "dma_device_type": 2 00:16:07.060 } 00:16:07.060 ], 00:16:07.060 "driver_specific": { 00:16:07.060 "raid": { 00:16:07.060 "uuid": "8e26d123-42d0-11ef-96ac-773515fba644", 00:16:07.060 "strip_size_kb": 0, 00:16:07.060 "state": "online", 00:16:07.060 "raid_level": "raid1", 00:16:07.060 "superblock": true, 00:16:07.060 "num_base_bdevs": 4, 00:16:07.060 "num_base_bdevs_discovered": 4, 00:16:07.060 "num_base_bdevs_operational": 4, 00:16:07.060 "base_bdevs_list": [ 00:16:07.060 { 00:16:07.060 "name": "BaseBdev1", 00:16:07.060 "uuid": "8d2e6671-42d0-11ef-96ac-773515fba644", 00:16:07.060 "is_configured": true, 00:16:07.060 "data_offset": 2048, 00:16:07.060 "data_size": 63488 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "name": "BaseBdev2", 00:16:07.060 "uuid": "8eae515d-42d0-11ef-96ac-773515fba644", 00:16:07.060 "is_configured": true, 00:16:07.060 "data_offset": 2048, 00:16:07.060 "data_size": 63488 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "name": "BaseBdev3", 00:16:07.060 "uuid": "8f687abd-42d0-11ef-96ac-773515fba644", 00:16:07.060 "is_configured": true, 00:16:07.060 "data_offset": 2048, 00:16:07.060 "data_size": 63488 00:16:07.060 }, 00:16:07.060 { 00:16:07.060 "name": "BaseBdev4", 00:16:07.060 "uuid": "90389cd2-42d0-11ef-96ac-773515fba644", 00:16:07.060 "is_configured": true, 00:16:07.060 "data_offset": 2048, 00:16:07.060 "data_size": 63488 00:16:07.060 } 00:16:07.060 ] 00:16:07.060 } 00:16:07.060 } 00:16:07.060 }' 00:16:07.060 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:07.060 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:07.060 BaseBdev2 00:16:07.060 BaseBdev3 00:16:07.060 BaseBdev4' 00:16:07.060 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.060 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.060 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.318 "name": "BaseBdev1", 00:16:07.318 "aliases": [ 00:16:07.318 "8d2e6671-42d0-11ef-96ac-773515fba644" 00:16:07.318 ], 00:16:07.318 "product_name": "Malloc disk", 00:16:07.318 
"block_size": 512, 00:16:07.318 "num_blocks": 65536, 00:16:07.318 "uuid": "8d2e6671-42d0-11ef-96ac-773515fba644", 00:16:07.318 "assigned_rate_limits": { 00:16:07.318 "rw_ios_per_sec": 0, 00:16:07.318 "rw_mbytes_per_sec": 0, 00:16:07.318 "r_mbytes_per_sec": 0, 00:16:07.318 "w_mbytes_per_sec": 0 00:16:07.318 }, 00:16:07.318 "claimed": true, 00:16:07.318 "claim_type": "exclusive_write", 00:16:07.318 "zoned": false, 00:16:07.318 "supported_io_types": { 00:16:07.318 "read": true, 00:16:07.318 "write": true, 00:16:07.318 "unmap": true, 00:16:07.318 "flush": true, 00:16:07.318 "reset": true, 00:16:07.318 "nvme_admin": false, 00:16:07.318 "nvme_io": false, 00:16:07.318 "nvme_io_md": false, 00:16:07.318 "write_zeroes": true, 00:16:07.318 "zcopy": true, 00:16:07.318 "get_zone_info": false, 00:16:07.318 "zone_management": false, 00:16:07.318 "zone_append": false, 00:16:07.318 "compare": false, 00:16:07.318 "compare_and_write": false, 00:16:07.318 "abort": true, 00:16:07.318 "seek_hole": false, 00:16:07.318 "seek_data": false, 00:16:07.318 "copy": true, 00:16:07.318 "nvme_iov_md": false 00:16:07.318 }, 00:16:07.318 "memory_domains": [ 00:16:07.318 { 00:16:07.318 "dma_device_id": "system", 00:16:07.318 "dma_device_type": 1 00:16:07.318 }, 00:16:07.318 { 00:16:07.318 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.318 "dma_device_type": 2 00:16:07.318 } 00:16:07.318 ], 00:16:07.318 "driver_specific": {} 00:16:07.318 }' 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:07.318 17:35:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.575 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.575 "name": "BaseBdev2", 00:16:07.575 "aliases": [ 00:16:07.575 "8eae515d-42d0-11ef-96ac-773515fba644" 00:16:07.575 ], 00:16:07.575 "product_name": "Malloc disk", 00:16:07.575 "block_size": 512, 00:16:07.575 "num_blocks": 65536, 00:16:07.575 "uuid": "8eae515d-42d0-11ef-96ac-773515fba644", 00:16:07.575 "assigned_rate_limits": { 
00:16:07.575 "rw_ios_per_sec": 0, 00:16:07.575 "rw_mbytes_per_sec": 0, 00:16:07.575 "r_mbytes_per_sec": 0, 00:16:07.575 "w_mbytes_per_sec": 0 00:16:07.575 }, 00:16:07.575 "claimed": true, 00:16:07.575 "claim_type": "exclusive_write", 00:16:07.575 "zoned": false, 00:16:07.575 "supported_io_types": { 00:16:07.575 "read": true, 00:16:07.575 "write": true, 00:16:07.575 "unmap": true, 00:16:07.575 "flush": true, 00:16:07.575 "reset": true, 00:16:07.575 "nvme_admin": false, 00:16:07.575 "nvme_io": false, 00:16:07.575 "nvme_io_md": false, 00:16:07.575 "write_zeroes": true, 00:16:07.575 "zcopy": true, 00:16:07.576 "get_zone_info": false, 00:16:07.576 "zone_management": false, 00:16:07.576 "zone_append": false, 00:16:07.576 "compare": false, 00:16:07.576 "compare_and_write": false, 00:16:07.576 "abort": true, 00:16:07.576 "seek_hole": false, 00:16:07.576 "seek_data": false, 00:16:07.576 "copy": true, 00:16:07.576 "nvme_iov_md": false 00:16:07.576 }, 00:16:07.576 "memory_domains": [ 00:16:07.576 { 00:16:07.576 "dma_device_id": "system", 00:16:07.576 "dma_device_type": 1 00:16:07.576 }, 00:16:07.576 { 00:16:07.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.576 "dma_device_type": 2 00:16:07.576 } 00:16:07.576 ], 00:16:07.576 "driver_specific": {} 00:16:07.576 }' 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:07.576 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:07.834 "name": "BaseBdev3", 00:16:07.834 "aliases": [ 00:16:07.834 "8f687abd-42d0-11ef-96ac-773515fba644" 00:16:07.834 ], 00:16:07.834 "product_name": "Malloc disk", 00:16:07.834 "block_size": 512, 00:16:07.834 "num_blocks": 65536, 00:16:07.834 "uuid": "8f687abd-42d0-11ef-96ac-773515fba644", 00:16:07.834 "assigned_rate_limits": { 00:16:07.834 "rw_ios_per_sec": 0, 00:16:07.834 "rw_mbytes_per_sec": 0, 00:16:07.834 "r_mbytes_per_sec": 0, 00:16:07.834 "w_mbytes_per_sec": 0 
00:16:07.834 }, 00:16:07.834 "claimed": true, 00:16:07.834 "claim_type": "exclusive_write", 00:16:07.834 "zoned": false, 00:16:07.834 "supported_io_types": { 00:16:07.834 "read": true, 00:16:07.834 "write": true, 00:16:07.834 "unmap": true, 00:16:07.834 "flush": true, 00:16:07.834 "reset": true, 00:16:07.834 "nvme_admin": false, 00:16:07.834 "nvme_io": false, 00:16:07.834 "nvme_io_md": false, 00:16:07.834 "write_zeroes": true, 00:16:07.834 "zcopy": true, 00:16:07.834 "get_zone_info": false, 00:16:07.834 "zone_management": false, 00:16:07.834 "zone_append": false, 00:16:07.834 "compare": false, 00:16:07.834 "compare_and_write": false, 00:16:07.834 "abort": true, 00:16:07.834 "seek_hole": false, 00:16:07.834 "seek_data": false, 00:16:07.834 "copy": true, 00:16:07.834 "nvme_iov_md": false 00:16:07.834 }, 00:16:07.834 "memory_domains": [ 00:16:07.834 { 00:16:07.834 "dma_device_id": "system", 00:16:07.834 "dma_device_type": 1 00:16:07.834 }, 00:16:07.834 { 00:16:07.834 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:07.834 "dma_device_type": 2 00:16:07.834 } 00:16:07.834 ], 00:16:07.834 "driver_specific": {} 00:16:07.834 }' 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:07.834 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:08.092 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:08.092 "name": "BaseBdev4", 00:16:08.092 "aliases": [ 00:16:08.092 "90389cd2-42d0-11ef-96ac-773515fba644" 00:16:08.092 ], 00:16:08.092 "product_name": "Malloc disk", 00:16:08.092 "block_size": 512, 00:16:08.092 "num_blocks": 65536, 00:16:08.092 "uuid": "90389cd2-42d0-11ef-96ac-773515fba644", 00:16:08.092 "assigned_rate_limits": { 00:16:08.092 "rw_ios_per_sec": 0, 00:16:08.092 "rw_mbytes_per_sec": 0, 00:16:08.092 "r_mbytes_per_sec": 0, 00:16:08.092 "w_mbytes_per_sec": 0 00:16:08.092 }, 00:16:08.092 "claimed": true, 00:16:08.093 "claim_type": "exclusive_write", 00:16:08.093 "zoned": false, 00:16:08.093 
"supported_io_types": { 00:16:08.093 "read": true, 00:16:08.093 "write": true, 00:16:08.093 "unmap": true, 00:16:08.093 "flush": true, 00:16:08.093 "reset": true, 00:16:08.093 "nvme_admin": false, 00:16:08.093 "nvme_io": false, 00:16:08.093 "nvme_io_md": false, 00:16:08.093 "write_zeroes": true, 00:16:08.093 "zcopy": true, 00:16:08.093 "get_zone_info": false, 00:16:08.093 "zone_management": false, 00:16:08.093 "zone_append": false, 00:16:08.093 "compare": false, 00:16:08.093 "compare_and_write": false, 00:16:08.093 "abort": true, 00:16:08.093 "seek_hole": false, 00:16:08.093 "seek_data": false, 00:16:08.093 "copy": true, 00:16:08.093 "nvme_iov_md": false 00:16:08.093 }, 00:16:08.093 "memory_domains": [ 00:16:08.093 { 00:16:08.093 "dma_device_id": "system", 00:16:08.093 "dma_device_type": 1 00:16:08.093 }, 00:16:08.093 { 00:16:08.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:08.093 "dma_device_type": 2 00:16:08.093 } 00:16:08.093 ], 00:16:08.093 "driver_specific": {} 00:16:08.093 }' 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.093 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:08.350 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:08.350 17:35:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:08.350 [2024-07-15 17:35:04.147534] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.351 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.917 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.917 "name": "Existed_Raid", 00:16:08.917 "uuid": "8e26d123-42d0-11ef-96ac-773515fba644", 00:16:08.917 "strip_size_kb": 0, 00:16:08.917 "state": "online", 00:16:08.917 "raid_level": "raid1", 00:16:08.917 "superblock": true, 00:16:08.917 "num_base_bdevs": 4, 00:16:08.917 "num_base_bdevs_discovered": 3, 00:16:08.917 "num_base_bdevs_operational": 3, 00:16:08.917 "base_bdevs_list": [ 00:16:08.917 { 00:16:08.917 "name": null, 00:16:08.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.917 "is_configured": false, 00:16:08.917 "data_offset": 2048, 00:16:08.917 "data_size": 63488 00:16:08.917 }, 00:16:08.917 { 00:16:08.917 "name": "BaseBdev2", 00:16:08.917 "uuid": "8eae515d-42d0-11ef-96ac-773515fba644", 00:16:08.917 "is_configured": true, 00:16:08.917 "data_offset": 2048, 00:16:08.917 "data_size": 63488 00:16:08.917 }, 00:16:08.917 { 00:16:08.917 "name": "BaseBdev3", 00:16:08.917 "uuid": "8f687abd-42d0-11ef-96ac-773515fba644", 00:16:08.917 "is_configured": true, 00:16:08.917 "data_offset": 2048, 00:16:08.917 "data_size": 63488 00:16:08.917 }, 00:16:08.917 { 00:16:08.917 "name": "BaseBdev4", 00:16:08.917 "uuid": "90389cd2-42d0-11ef-96ac-773515fba644", 00:16:08.917 "is_configured": true, 00:16:08.917 "data_offset": 2048, 00:16:08.917 "data_size": 63488 00:16:08.917 } 00:16:08.917 ] 00:16:08.917 }' 00:16:08.917 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.917 17:35:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.917 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:08.917 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:08.917 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.917 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:09.184 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:09.184 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.184 17:35:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:09.476 [2024-07-15 17:35:05.201639] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:09.476 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:09.476 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:09.476 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.476 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:09.734 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:09.734 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:09.734 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:09.992 [2024-07-15 17:35:05.659817] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:09.992 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:09.992 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:09.992 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:09.992 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.251 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:10.251 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:10.251 17:35:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:10.512 [2024-07-15 17:35:06.178487] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:10.512 [2024-07-15 17:35:06.178539] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:10.512 [2024-07-15 17:35:06.184412] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:10.512 [2024-07-15 17:35:06.184431] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:10.512 [2024-07-15 17:35:06.184436] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10e52f634a00 name Existed_Raid, state offline 00:16:10.512 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:10.512 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:10.512 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.512 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:10.776 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:10.776 17:35:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:10.776 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:10.776 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:10.776 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:10.776 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:11.041 BaseBdev2 00:16:11.041 17:35:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:11.041 17:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:11.041 17:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:11.041 17:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:11.041 17:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:11.041 17:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:11.041 17:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:11.309 17:35:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:11.579 [ 00:16:11.579 { 00:16:11.579 "name": "BaseBdev2", 00:16:11.579 "aliases": [ 00:16:11.579 "93813536-42d0-11ef-96ac-773515fba644" 00:16:11.579 ], 00:16:11.579 "product_name": "Malloc disk", 00:16:11.579 "block_size": 512, 00:16:11.580 "num_blocks": 65536, 00:16:11.580 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:11.580 "assigned_rate_limits": { 00:16:11.580 "rw_ios_per_sec": 0, 00:16:11.580 "rw_mbytes_per_sec": 0, 00:16:11.580 "r_mbytes_per_sec": 0, 00:16:11.580 "w_mbytes_per_sec": 0 00:16:11.580 }, 00:16:11.580 "claimed": false, 00:16:11.580 "zoned": false, 00:16:11.580 "supported_io_types": { 00:16:11.580 "read": true, 00:16:11.580 "write": true, 00:16:11.580 "unmap": true, 00:16:11.580 "flush": true, 00:16:11.580 "reset": true, 00:16:11.580 "nvme_admin": false, 00:16:11.580 "nvme_io": false, 00:16:11.580 "nvme_io_md": false, 00:16:11.580 "write_zeroes": true, 00:16:11.580 "zcopy": true, 00:16:11.580 "get_zone_info": false, 00:16:11.580 "zone_management": false, 00:16:11.580 "zone_append": false, 00:16:11.580 "compare": false, 00:16:11.580 "compare_and_write": false, 00:16:11.580 "abort": true, 00:16:11.580 "seek_hole": false, 00:16:11.580 "seek_data": false, 00:16:11.580 "copy": true, 00:16:11.580 "nvme_iov_md": false 00:16:11.580 }, 00:16:11.580 "memory_domains": [ 00:16:11.580 { 00:16:11.580 "dma_device_id": "system", 00:16:11.580 "dma_device_type": 1 00:16:11.580 }, 00:16:11.580 { 00:16:11.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.580 "dma_device_type": 2 00:16:11.580 } 00:16:11.580 ], 00:16:11.580 "driver_specific": {} 00:16:11.580 } 00:16:11.580 ] 00:16:11.580 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:11.580 17:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:11.580 17:35:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:11.580 17:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:11.854 BaseBdev3 00:16:11.854 17:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:11.854 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:11.854 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:11.854 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:11.854 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:11.854 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:11.854 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.131 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:12.393 [ 00:16:12.393 { 00:16:12.393 "name": "BaseBdev3", 00:16:12.393 "aliases": [ 00:16:12.393 "93f5c94b-42d0-11ef-96ac-773515fba644" 00:16:12.393 ], 00:16:12.393 "product_name": "Malloc disk", 00:16:12.393 "block_size": 512, 00:16:12.393 "num_blocks": 65536, 00:16:12.393 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:12.393 "assigned_rate_limits": { 00:16:12.393 "rw_ios_per_sec": 0, 00:16:12.393 "rw_mbytes_per_sec": 0, 00:16:12.393 "r_mbytes_per_sec": 0, 00:16:12.393 "w_mbytes_per_sec": 0 00:16:12.393 }, 00:16:12.393 "claimed": false, 00:16:12.393 "zoned": false, 00:16:12.393 "supported_io_types": { 00:16:12.393 "read": true, 00:16:12.393 "write": true, 00:16:12.393 "unmap": true, 00:16:12.393 "flush": true, 00:16:12.393 "reset": true, 00:16:12.393 "nvme_admin": false, 00:16:12.393 "nvme_io": false, 00:16:12.393 "nvme_io_md": false, 00:16:12.393 "write_zeroes": true, 00:16:12.393 "zcopy": true, 00:16:12.393 "get_zone_info": false, 00:16:12.393 "zone_management": false, 00:16:12.393 "zone_append": false, 00:16:12.393 "compare": false, 00:16:12.393 "compare_and_write": false, 00:16:12.393 "abort": true, 00:16:12.393 "seek_hole": false, 00:16:12.393 "seek_data": false, 00:16:12.393 "copy": true, 00:16:12.393 "nvme_iov_md": false 00:16:12.393 }, 00:16:12.393 "memory_domains": [ 00:16:12.393 { 00:16:12.393 "dma_device_id": "system", 00:16:12.393 "dma_device_type": 1 00:16:12.393 }, 00:16:12.393 { 00:16:12.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.393 "dma_device_type": 2 00:16:12.393 } 00:16:12.393 ], 00:16:12.393 "driver_specific": {} 00:16:12.393 } 00:16:12.393 ] 00:16:12.393 17:35:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:12.393 17:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:12.393 17:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:12.393 17:35:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:12.651 BaseBdev4 00:16:12.651 17:35:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:12.651 17:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:12.651 17:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:12.651 17:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:12.651 17:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:12.651 17:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:12.651 17:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:12.910 17:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:12.910 [ 00:16:12.910 { 00:16:12.910 "name": "BaseBdev4", 00:16:12.910 "aliases": [ 00:16:12.910 "94674fb2-42d0-11ef-96ac-773515fba644" 00:16:12.910 ], 00:16:12.910 "product_name": "Malloc disk", 00:16:12.910 "block_size": 512, 00:16:12.910 "num_blocks": 65536, 00:16:12.910 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:12.910 "assigned_rate_limits": { 00:16:12.910 "rw_ios_per_sec": 0, 00:16:12.910 "rw_mbytes_per_sec": 0, 00:16:12.910 "r_mbytes_per_sec": 0, 00:16:12.910 "w_mbytes_per_sec": 0 00:16:12.910 }, 00:16:12.910 "claimed": false, 00:16:12.910 "zoned": false, 00:16:12.910 "supported_io_types": { 00:16:12.910 "read": true, 00:16:12.910 "write": true, 00:16:12.910 "unmap": true, 00:16:12.910 "flush": true, 00:16:12.910 "reset": true, 00:16:12.910 "nvme_admin": false, 00:16:12.910 "nvme_io": false, 00:16:12.910 "nvme_io_md": false, 00:16:12.910 "write_zeroes": true, 00:16:12.910 "zcopy": true, 00:16:12.910 "get_zone_info": false, 00:16:12.910 "zone_management": false, 00:16:12.910 "zone_append": false, 00:16:12.910 "compare": false, 00:16:12.910 "compare_and_write": false, 00:16:12.910 "abort": true, 00:16:12.910 "seek_hole": false, 00:16:12.910 "seek_data": false, 00:16:12.910 "copy": true, 00:16:12.910 "nvme_iov_md": false 00:16:12.910 }, 00:16:12.910 "memory_domains": [ 00:16:12.910 { 00:16:12.910 "dma_device_id": "system", 00:16:12.910 "dma_device_type": 1 00:16:12.910 }, 00:16:12.910 { 00:16:12.910 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.910 "dma_device_type": 2 00:16:12.910 } 00:16:12.910 ], 00:16:12.910 "driver_specific": {} 00:16:12.910 } 00:16:12.910 ] 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:13.168 [2024-07-15 17:35:08.960421] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:13.168 [2024-07-15 17:35:08.960471] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:13.168 [2024-07-15 17:35:08.960480] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:13.168 [2024-07-15 17:35:08.961054] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:13.168 [2024-07-15 17:35:08.961075] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:13.168 17:35:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.426 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:13.426 "name": "Existed_Raid", 00:16:13.426 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:13.426 "strip_size_kb": 0, 00:16:13.426 "state": "configuring", 00:16:13.426 "raid_level": "raid1", 00:16:13.426 "superblock": true, 00:16:13.426 "num_base_bdevs": 4, 00:16:13.426 "num_base_bdevs_discovered": 3, 00:16:13.426 "num_base_bdevs_operational": 4, 00:16:13.426 "base_bdevs_list": [ 00:16:13.426 { 00:16:13.427 "name": "BaseBdev1", 00:16:13.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.427 "is_configured": false, 00:16:13.427 "data_offset": 0, 00:16:13.427 "data_size": 0 00:16:13.427 }, 00:16:13.427 { 00:16:13.427 "name": "BaseBdev2", 00:16:13.427 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:13.427 "is_configured": true, 00:16:13.427 "data_offset": 2048, 00:16:13.427 "data_size": 63488 00:16:13.427 }, 00:16:13.427 { 00:16:13.427 "name": "BaseBdev3", 00:16:13.427 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:13.427 "is_configured": true, 00:16:13.427 "data_offset": 2048, 00:16:13.427 "data_size": 63488 00:16:13.427 }, 00:16:13.427 { 00:16:13.427 "name": "BaseBdev4", 00:16:13.427 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:13.427 "is_configured": true, 00:16:13.427 "data_offset": 2048, 00:16:13.427 "data_size": 63488 00:16:13.427 } 00:16:13.427 ] 00:16:13.427 }' 00:16:13.427 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:13.427 17:35:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:13.993 [2024-07-15 17:35:09.776432] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:13.993 17:35:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:14.252 17:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:14.252 "name": "Existed_Raid", 00:16:14.252 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:14.252 "strip_size_kb": 0, 00:16:14.252 "state": "configuring", 00:16:14.252 "raid_level": "raid1", 00:16:14.252 "superblock": true, 00:16:14.252 "num_base_bdevs": 4, 00:16:14.252 "num_base_bdevs_discovered": 2, 00:16:14.252 "num_base_bdevs_operational": 4, 00:16:14.252 "base_bdevs_list": [ 00:16:14.252 { 00:16:14.252 "name": "BaseBdev1", 00:16:14.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.252 "is_configured": false, 00:16:14.252 "data_offset": 0, 00:16:14.252 "data_size": 0 00:16:14.252 }, 00:16:14.252 { 00:16:14.252 "name": null, 00:16:14.252 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:14.252 "is_configured": false, 00:16:14.252 "data_offset": 2048, 00:16:14.252 "data_size": 63488 00:16:14.252 }, 00:16:14.252 { 00:16:14.252 "name": "BaseBdev3", 00:16:14.252 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:14.252 "is_configured": true, 00:16:14.252 "data_offset": 2048, 00:16:14.252 "data_size": 63488 00:16:14.252 }, 00:16:14.252 { 00:16:14.252 "name": "BaseBdev4", 00:16:14.252 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:14.252 "is_configured": true, 00:16:14.252 "data_offset": 2048, 00:16:14.252 "data_size": 63488 00:16:14.252 } 00:16:14.252 ] 00:16:14.252 }' 00:16:14.252 17:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:14.252 17:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.511 17:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:14.511 17:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:15.077 [2024-07-15 17:35:10.864620] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:15.077 BaseBdev1 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:15.077 17:35:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:15.335 17:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:15.594 [ 00:16:15.594 { 00:16:15.594 "name": "BaseBdev1", 00:16:15.594 "aliases": [ 00:16:15.594 "95fac2fd-42d0-11ef-96ac-773515fba644" 00:16:15.594 ], 00:16:15.594 "product_name": "Malloc disk", 00:16:15.594 "block_size": 512, 00:16:15.594 "num_blocks": 65536, 00:16:15.594 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:15.594 "assigned_rate_limits": { 00:16:15.594 "rw_ios_per_sec": 0, 00:16:15.594 "rw_mbytes_per_sec": 0, 00:16:15.594 "r_mbytes_per_sec": 0, 00:16:15.594 "w_mbytes_per_sec": 0 00:16:15.594 }, 00:16:15.594 "claimed": true, 00:16:15.594 "claim_type": "exclusive_write", 00:16:15.594 "zoned": false, 00:16:15.594 "supported_io_types": { 00:16:15.594 "read": true, 00:16:15.594 "write": true, 00:16:15.594 "unmap": true, 00:16:15.594 "flush": true, 00:16:15.594 "reset": true, 00:16:15.594 "nvme_admin": false, 00:16:15.594 "nvme_io": false, 00:16:15.594 "nvme_io_md": false, 00:16:15.594 "write_zeroes": true, 00:16:15.594 "zcopy": true, 00:16:15.594 "get_zone_info": false, 00:16:15.594 "zone_management": false, 00:16:15.594 "zone_append": false, 00:16:15.594 "compare": false, 00:16:15.594 "compare_and_write": false, 00:16:15.594 "abort": true, 00:16:15.594 "seek_hole": false, 00:16:15.594 "seek_data": false, 00:16:15.594 "copy": true, 00:16:15.594 "nvme_iov_md": false 00:16:15.594 }, 00:16:15.594 "memory_domains": [ 00:16:15.594 { 00:16:15.594 "dma_device_id": "system", 00:16:15.594 "dma_device_type": 1 00:16:15.594 }, 00:16:15.594 { 00:16:15.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:15.594 "dma_device_type": 2 00:16:15.594 } 00:16:15.594 ], 00:16:15.594 "driver_specific": {} 00:16:15.594 } 00:16:15.594 ] 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:15.594 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:15.852 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:15.852 "name": "Existed_Raid", 00:16:15.852 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:15.852 "strip_size_kb": 0, 00:16:15.852 "state": "configuring", 00:16:15.852 "raid_level": "raid1", 00:16:15.852 "superblock": true, 00:16:15.852 "num_base_bdevs": 4, 00:16:15.852 "num_base_bdevs_discovered": 3, 00:16:15.852 "num_base_bdevs_operational": 4, 00:16:15.852 "base_bdevs_list": [ 00:16:15.852 { 00:16:15.852 "name": "BaseBdev1", 00:16:15.852 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:15.852 "is_configured": true, 00:16:15.852 "data_offset": 2048, 00:16:15.852 "data_size": 63488 00:16:15.852 }, 00:16:15.852 { 00:16:15.852 "name": null, 00:16:15.852 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:15.852 "is_configured": false, 00:16:15.852 "data_offset": 2048, 00:16:15.852 "data_size": 63488 00:16:15.852 }, 00:16:15.852 { 00:16:15.852 "name": "BaseBdev3", 00:16:15.852 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:15.852 "is_configured": true, 00:16:15.852 "data_offset": 2048, 00:16:15.852 "data_size": 63488 00:16:15.852 }, 00:16:15.852 { 00:16:15.852 "name": "BaseBdev4", 00:16:15.852 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:15.852 "is_configured": true, 00:16:15.852 "data_offset": 2048, 00:16:15.852 "data_size": 63488 00:16:15.852 } 00:16:15.852 ] 00:16:15.852 }' 00:16:15.852 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:15.852 17:35:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.111 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.111 17:35:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:16.370 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:16.370 17:35:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:16.700 [2024-07-15 17:35:12.432489] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.700 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:16.956 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:16.956 "name": "Existed_Raid", 00:16:16.956 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:16.956 "strip_size_kb": 0, 00:16:16.956 "state": "configuring", 00:16:16.956 "raid_level": "raid1", 00:16:16.956 "superblock": true, 00:16:16.956 "num_base_bdevs": 4, 00:16:16.956 "num_base_bdevs_discovered": 2, 00:16:16.956 "num_base_bdevs_operational": 4, 00:16:16.956 "base_bdevs_list": [ 00:16:16.956 { 00:16:16.956 "name": "BaseBdev1", 00:16:16.957 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:16.957 "is_configured": true, 00:16:16.957 "data_offset": 2048, 00:16:16.957 "data_size": 63488 00:16:16.957 }, 00:16:16.957 { 00:16:16.957 "name": null, 00:16:16.957 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:16.957 "is_configured": false, 00:16:16.957 "data_offset": 2048, 00:16:16.957 "data_size": 63488 00:16:16.957 }, 00:16:16.957 { 00:16:16.957 "name": null, 00:16:16.957 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:16.957 "is_configured": false, 00:16:16.957 "data_offset": 2048, 00:16:16.957 "data_size": 63488 00:16:16.957 }, 00:16:16.957 { 00:16:16.957 "name": "BaseBdev4", 00:16:16.957 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:16.957 "is_configured": true, 00:16:16.957 "data_offset": 2048, 00:16:16.957 "data_size": 63488 00:16:16.957 } 00:16:16.957 ] 00:16:16.957 }' 00:16:16.957 17:35:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:16.957 17:35:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:17.213 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.213 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:17.780 [2024-07-15 17:35:13.536512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:17.780 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:18.039 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:18.039 "name": "Existed_Raid", 00:16:18.039 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:18.039 "strip_size_kb": 0, 00:16:18.039 "state": "configuring", 00:16:18.039 "raid_level": "raid1", 00:16:18.039 "superblock": true, 00:16:18.039 "num_base_bdevs": 4, 00:16:18.039 "num_base_bdevs_discovered": 3, 00:16:18.039 "num_base_bdevs_operational": 4, 00:16:18.039 "base_bdevs_list": [ 00:16:18.039 { 00:16:18.039 "name": "BaseBdev1", 00:16:18.039 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:18.039 "is_configured": true, 00:16:18.039 "data_offset": 2048, 00:16:18.039 "data_size": 63488 00:16:18.039 }, 00:16:18.039 { 00:16:18.039 "name": null, 00:16:18.039 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:18.039 "is_configured": false, 00:16:18.039 "data_offset": 2048, 00:16:18.039 "data_size": 63488 00:16:18.039 }, 00:16:18.039 { 00:16:18.039 "name": "BaseBdev3", 00:16:18.039 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:18.039 "is_configured": true, 00:16:18.039 "data_offset": 2048, 00:16:18.039 "data_size": 63488 00:16:18.039 }, 00:16:18.039 { 00:16:18.039 "name": "BaseBdev4", 00:16:18.039 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:18.039 "is_configured": true, 00:16:18.039 "data_offset": 2048, 
00:16:18.039 "data_size": 63488 00:16:18.039 } 00:16:18.039 ] 00:16:18.039 }' 00:16:18.039 17:35:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:18.039 17:35:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:18.297 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.297 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:18.555 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:18.555 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:18.814 [2024-07-15 17:35:14.568536] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:18.814 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:19.090 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:19.090 "name": "Existed_Raid", 00:16:19.090 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:19.090 "strip_size_kb": 0, 00:16:19.090 "state": "configuring", 00:16:19.090 "raid_level": "raid1", 00:16:19.090 "superblock": true, 00:16:19.090 "num_base_bdevs": 4, 00:16:19.090 "num_base_bdevs_discovered": 2, 00:16:19.090 "num_base_bdevs_operational": 4, 00:16:19.090 "base_bdevs_list": [ 00:16:19.090 { 00:16:19.090 "name": null, 00:16:19.090 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:19.090 "is_configured": false, 00:16:19.090 "data_offset": 2048, 00:16:19.090 "data_size": 63488 00:16:19.090 }, 00:16:19.090 { 00:16:19.090 "name": null, 00:16:19.090 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:19.090 "is_configured": false, 00:16:19.090 "data_offset": 2048, 00:16:19.090 "data_size": 63488 00:16:19.090 }, 00:16:19.090 { 00:16:19.090 "name": "BaseBdev3", 00:16:19.090 "uuid": 
"93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:19.090 "is_configured": true, 00:16:19.090 "data_offset": 2048, 00:16:19.090 "data_size": 63488 00:16:19.090 }, 00:16:19.090 { 00:16:19.090 "name": "BaseBdev4", 00:16:19.090 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:19.090 "is_configured": true, 00:16:19.090 "data_offset": 2048, 00:16:19.090 "data_size": 63488 00:16:19.090 } 00:16:19.090 ] 00:16:19.090 }' 00:16:19.090 17:35:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:19.090 17:35:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:19.348 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.348 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:19.678 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:19.678 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:19.937 [2024-07-15 17:35:15.594386] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.937 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:20.228 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:20.229 "name": "Existed_Raid", 00:16:20.229 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:20.229 "strip_size_kb": 0, 00:16:20.229 "state": "configuring", 00:16:20.229 "raid_level": "raid1", 00:16:20.229 "superblock": true, 00:16:20.229 "num_base_bdevs": 4, 00:16:20.229 "num_base_bdevs_discovered": 3, 00:16:20.229 "num_base_bdevs_operational": 4, 00:16:20.229 "base_bdevs_list": [ 00:16:20.229 { 00:16:20.229 "name": null, 00:16:20.229 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:20.229 "is_configured": false, 
00:16:20.229 "data_offset": 2048, 00:16:20.229 "data_size": 63488 00:16:20.229 }, 00:16:20.229 { 00:16:20.229 "name": "BaseBdev2", 00:16:20.229 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:20.229 "is_configured": true, 00:16:20.229 "data_offset": 2048, 00:16:20.229 "data_size": 63488 00:16:20.229 }, 00:16:20.229 { 00:16:20.229 "name": "BaseBdev3", 00:16:20.229 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:20.229 "is_configured": true, 00:16:20.229 "data_offset": 2048, 00:16:20.229 "data_size": 63488 00:16:20.229 }, 00:16:20.229 { 00:16:20.229 "name": "BaseBdev4", 00:16:20.229 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:20.229 "is_configured": true, 00:16:20.229 "data_offset": 2048, 00:16:20.229 "data_size": 63488 00:16:20.229 } 00:16:20.229 ] 00:16:20.229 }' 00:16:20.229 17:35:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:20.229 17:35:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:20.488 17:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:20.488 17:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:20.746 17:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:20.746 17:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:20.746 17:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.004 17:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 95fac2fd-42d0-11ef-96ac-773515fba644 00:16:21.262 [2024-07-15 17:35:16.946558] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:21.262 [2024-07-15 17:35:16.946629] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10e52f634f00 00:16:21.262 [2024-07-15 17:35:16.946634] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:21.262 [2024-07-15 17:35:16.946655] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10e52f697e20 00:16:21.262 [2024-07-15 17:35:16.946703] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10e52f634f00 00:16:21.262 [2024-07-15 17:35:16.946707] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x10e52f634f00 00:16:21.262 [2024-07-15 17:35:16.946727] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:21.262 NewBaseBdev 00:16:21.262 17:35:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:21.262 17:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:21.262 17:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:21.262 17:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:21.262 17:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:21.262 17:35:16 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:21.262 17:35:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:21.520 17:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:21.778 [ 00:16:21.778 { 00:16:21.778 "name": "NewBaseBdev", 00:16:21.778 "aliases": [ 00:16:21.778 "95fac2fd-42d0-11ef-96ac-773515fba644" 00:16:21.778 ], 00:16:21.778 "product_name": "Malloc disk", 00:16:21.778 "block_size": 512, 00:16:21.778 "num_blocks": 65536, 00:16:21.778 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:21.778 "assigned_rate_limits": { 00:16:21.778 "rw_ios_per_sec": 0, 00:16:21.778 "rw_mbytes_per_sec": 0, 00:16:21.778 "r_mbytes_per_sec": 0, 00:16:21.778 "w_mbytes_per_sec": 0 00:16:21.778 }, 00:16:21.778 "claimed": true, 00:16:21.778 "claim_type": "exclusive_write", 00:16:21.778 "zoned": false, 00:16:21.778 "supported_io_types": { 00:16:21.778 "read": true, 00:16:21.778 "write": true, 00:16:21.778 "unmap": true, 00:16:21.778 "flush": true, 00:16:21.778 "reset": true, 00:16:21.778 "nvme_admin": false, 00:16:21.778 "nvme_io": false, 00:16:21.778 "nvme_io_md": false, 00:16:21.778 "write_zeroes": true, 00:16:21.778 "zcopy": true, 00:16:21.778 "get_zone_info": false, 00:16:21.778 "zone_management": false, 00:16:21.778 "zone_append": false, 00:16:21.778 "compare": false, 00:16:21.778 "compare_and_write": false, 00:16:21.778 "abort": true, 00:16:21.778 "seek_hole": false, 00:16:21.778 "seek_data": false, 00:16:21.778 "copy": true, 00:16:21.779 "nvme_iov_md": false 00:16:21.779 }, 00:16:21.779 "memory_domains": [ 00:16:21.779 { 00:16:21.779 "dma_device_id": "system", 00:16:21.779 "dma_device_type": 1 00:16:21.779 }, 00:16:21.779 { 00:16:21.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:21.779 "dma_device_type": 2 00:16:21.779 } 00:16:21.779 ], 00:16:21.779 "driver_specific": {} 00:16:21.779 } 00:16:21.779 ] 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.779 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:22.038 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.038 "name": "Existed_Raid", 00:16:22.038 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:22.038 "strip_size_kb": 0, 00:16:22.038 "state": "online", 00:16:22.038 "raid_level": "raid1", 00:16:22.038 "superblock": true, 00:16:22.038 "num_base_bdevs": 4, 00:16:22.038 "num_base_bdevs_discovered": 4, 00:16:22.038 "num_base_bdevs_operational": 4, 00:16:22.038 "base_bdevs_list": [ 00:16:22.038 { 00:16:22.038 "name": "NewBaseBdev", 00:16:22.038 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:22.038 "is_configured": true, 00:16:22.038 "data_offset": 2048, 00:16:22.038 "data_size": 63488 00:16:22.038 }, 00:16:22.038 { 00:16:22.038 "name": "BaseBdev2", 00:16:22.038 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:22.038 "is_configured": true, 00:16:22.038 "data_offset": 2048, 00:16:22.038 "data_size": 63488 00:16:22.038 }, 00:16:22.038 { 00:16:22.038 "name": "BaseBdev3", 00:16:22.038 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:22.038 "is_configured": true, 00:16:22.038 "data_offset": 2048, 00:16:22.038 "data_size": 63488 00:16:22.038 }, 00:16:22.038 { 00:16:22.038 "name": "BaseBdev4", 00:16:22.038 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:22.038 "is_configured": true, 00:16:22.038 "data_offset": 2048, 00:16:22.038 "data_size": 63488 00:16:22.038 } 00:16:22.038 ] 00:16:22.038 }' 00:16:22.038 17:35:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.038 17:35:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:22.297 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:22.297 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:22.297 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:22.297 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:22.297 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:22.297 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:22.297 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:22.297 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:22.555 [2024-07-15 17:35:18.330493] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:22.555 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:22.555 "name": "Existed_Raid", 00:16:22.555 "aliases": [ 00:16:22.555 "94d83a3b-42d0-11ef-96ac-773515fba644" 00:16:22.555 ], 00:16:22.555 "product_name": "Raid Volume", 00:16:22.555 "block_size": 512, 00:16:22.555 "num_blocks": 63488, 00:16:22.555 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:22.555 "assigned_rate_limits": { 00:16:22.555 "rw_ios_per_sec": 0, 00:16:22.555 "rw_mbytes_per_sec": 0, 00:16:22.555 "r_mbytes_per_sec": 0, 00:16:22.555 "w_mbytes_per_sec": 0 00:16:22.555 }, 
00:16:22.555 "claimed": false, 00:16:22.555 "zoned": false, 00:16:22.555 "supported_io_types": { 00:16:22.555 "read": true, 00:16:22.555 "write": true, 00:16:22.555 "unmap": false, 00:16:22.555 "flush": false, 00:16:22.555 "reset": true, 00:16:22.555 "nvme_admin": false, 00:16:22.555 "nvme_io": false, 00:16:22.555 "nvme_io_md": false, 00:16:22.555 "write_zeroes": true, 00:16:22.555 "zcopy": false, 00:16:22.555 "get_zone_info": false, 00:16:22.555 "zone_management": false, 00:16:22.555 "zone_append": false, 00:16:22.555 "compare": false, 00:16:22.555 "compare_and_write": false, 00:16:22.555 "abort": false, 00:16:22.555 "seek_hole": false, 00:16:22.555 "seek_data": false, 00:16:22.555 "copy": false, 00:16:22.555 "nvme_iov_md": false 00:16:22.555 }, 00:16:22.555 "memory_domains": [ 00:16:22.555 { 00:16:22.555 "dma_device_id": "system", 00:16:22.555 "dma_device_type": 1 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.555 "dma_device_type": 2 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "dma_device_id": "system", 00:16:22.555 "dma_device_type": 1 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.555 "dma_device_type": 2 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "dma_device_id": "system", 00:16:22.555 "dma_device_type": 1 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.555 "dma_device_type": 2 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "dma_device_id": "system", 00:16:22.555 "dma_device_type": 1 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.555 "dma_device_type": 2 00:16:22.555 } 00:16:22.555 ], 00:16:22.555 "driver_specific": { 00:16:22.555 "raid": { 00:16:22.555 "uuid": "94d83a3b-42d0-11ef-96ac-773515fba644", 00:16:22.555 "strip_size_kb": 0, 00:16:22.555 "state": "online", 00:16:22.555 "raid_level": "raid1", 00:16:22.555 "superblock": true, 00:16:22.555 "num_base_bdevs": 4, 00:16:22.555 "num_base_bdevs_discovered": 4, 00:16:22.555 "num_base_bdevs_operational": 4, 00:16:22.555 "base_bdevs_list": [ 00:16:22.555 { 00:16:22.555 "name": "NewBaseBdev", 00:16:22.555 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:22.555 "is_configured": true, 00:16:22.555 "data_offset": 2048, 00:16:22.555 "data_size": 63488 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "name": "BaseBdev2", 00:16:22.555 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:22.555 "is_configured": true, 00:16:22.555 "data_offset": 2048, 00:16:22.555 "data_size": 63488 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "name": "BaseBdev3", 00:16:22.555 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:22.555 "is_configured": true, 00:16:22.555 "data_offset": 2048, 00:16:22.555 "data_size": 63488 00:16:22.555 }, 00:16:22.555 { 00:16:22.555 "name": "BaseBdev4", 00:16:22.555 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:22.555 "is_configured": true, 00:16:22.555 "data_offset": 2048, 00:16:22.555 "data_size": 63488 00:16:22.555 } 00:16:22.555 ] 00:16:22.555 } 00:16:22.555 } 00:16:22.555 }' 00:16:22.555 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:22.555 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:22.555 BaseBdev2 00:16:22.555 BaseBdev3 00:16:22.555 BaseBdev4' 00:16:22.555 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name 
in $base_bdev_names 00:16:22.555 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:22.555 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:22.812 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:22.812 "name": "NewBaseBdev", 00:16:22.812 "aliases": [ 00:16:22.813 "95fac2fd-42d0-11ef-96ac-773515fba644" 00:16:22.813 ], 00:16:22.813 "product_name": "Malloc disk", 00:16:22.813 "block_size": 512, 00:16:22.813 "num_blocks": 65536, 00:16:22.813 "uuid": "95fac2fd-42d0-11ef-96ac-773515fba644", 00:16:22.813 "assigned_rate_limits": { 00:16:22.813 "rw_ios_per_sec": 0, 00:16:22.813 "rw_mbytes_per_sec": 0, 00:16:22.813 "r_mbytes_per_sec": 0, 00:16:22.813 "w_mbytes_per_sec": 0 00:16:22.813 }, 00:16:22.813 "claimed": true, 00:16:22.813 "claim_type": "exclusive_write", 00:16:22.813 "zoned": false, 00:16:22.813 "supported_io_types": { 00:16:22.813 "read": true, 00:16:22.813 "write": true, 00:16:22.813 "unmap": true, 00:16:22.813 "flush": true, 00:16:22.813 "reset": true, 00:16:22.813 "nvme_admin": false, 00:16:22.813 "nvme_io": false, 00:16:22.813 "nvme_io_md": false, 00:16:22.813 "write_zeroes": true, 00:16:22.813 "zcopy": true, 00:16:22.813 "get_zone_info": false, 00:16:22.813 "zone_management": false, 00:16:22.813 "zone_append": false, 00:16:22.813 "compare": false, 00:16:22.813 "compare_and_write": false, 00:16:22.813 "abort": true, 00:16:22.813 "seek_hole": false, 00:16:22.813 "seek_data": false, 00:16:22.813 "copy": true, 00:16:22.813 "nvme_iov_md": false 00:16:22.813 }, 00:16:22.813 "memory_domains": [ 00:16:22.813 { 00:16:22.813 "dma_device_id": "system", 00:16:22.813 "dma_device_type": 1 00:16:22.813 }, 00:16:22.813 { 00:16:22.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:22.813 "dma_device_type": 2 00:16:22.813 } 00:16:22.813 ], 00:16:22.813 "driver_specific": {} 00:16:22.813 }' 00:16:22.813 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.813 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:22.813 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:22.813 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.813 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:22.813 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:22.813 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:22.813 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.072 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.072 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.072 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.072 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.072 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.072 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.072 17:35:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:23.072 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.072 "name": "BaseBdev2", 00:16:23.072 "aliases": [ 00:16:23.072 "93813536-42d0-11ef-96ac-773515fba644" 00:16:23.072 ], 00:16:23.072 "product_name": "Malloc disk", 00:16:23.072 "block_size": 512, 00:16:23.072 "num_blocks": 65536, 00:16:23.072 "uuid": "93813536-42d0-11ef-96ac-773515fba644", 00:16:23.072 "assigned_rate_limits": { 00:16:23.072 "rw_ios_per_sec": 0, 00:16:23.072 "rw_mbytes_per_sec": 0, 00:16:23.072 "r_mbytes_per_sec": 0, 00:16:23.072 "w_mbytes_per_sec": 0 00:16:23.072 }, 00:16:23.072 "claimed": true, 00:16:23.072 "claim_type": "exclusive_write", 00:16:23.072 "zoned": false, 00:16:23.072 "supported_io_types": { 00:16:23.072 "read": true, 00:16:23.072 "write": true, 00:16:23.072 "unmap": true, 00:16:23.072 "flush": true, 00:16:23.072 "reset": true, 00:16:23.072 "nvme_admin": false, 00:16:23.072 "nvme_io": false, 00:16:23.072 "nvme_io_md": false, 00:16:23.072 "write_zeroes": true, 00:16:23.072 "zcopy": true, 00:16:23.072 "get_zone_info": false, 00:16:23.072 "zone_management": false, 00:16:23.072 "zone_append": false, 00:16:23.072 "compare": false, 00:16:23.072 "compare_and_write": false, 00:16:23.072 "abort": true, 00:16:23.072 "seek_hole": false, 00:16:23.072 "seek_data": false, 00:16:23.072 "copy": true, 00:16:23.072 "nvme_iov_md": false 00:16:23.072 }, 00:16:23.072 "memory_domains": [ 00:16:23.072 { 00:16:23.072 "dma_device_id": "system", 00:16:23.072 "dma_device_type": 1 00:16:23.072 }, 00:16:23.072 { 00:16:23.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.072 "dma_device_type": 2 00:16:23.072 } 00:16:23.072 ], 00:16:23.072 "driver_specific": {} 00:16:23.072 }' 00:16:23.072 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:23.330 17:35:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.589 "name": "BaseBdev3", 00:16:23.589 "aliases": [ 00:16:23.589 "93f5c94b-42d0-11ef-96ac-773515fba644" 00:16:23.589 ], 00:16:23.589 "product_name": "Malloc disk", 00:16:23.589 "block_size": 512, 00:16:23.589 "num_blocks": 65536, 00:16:23.589 "uuid": "93f5c94b-42d0-11ef-96ac-773515fba644", 00:16:23.589 "assigned_rate_limits": { 00:16:23.589 "rw_ios_per_sec": 0, 00:16:23.589 "rw_mbytes_per_sec": 0, 00:16:23.589 "r_mbytes_per_sec": 0, 00:16:23.589 "w_mbytes_per_sec": 0 00:16:23.589 }, 00:16:23.589 "claimed": true, 00:16:23.589 "claim_type": "exclusive_write", 00:16:23.589 "zoned": false, 00:16:23.589 "supported_io_types": { 00:16:23.589 "read": true, 00:16:23.589 "write": true, 00:16:23.589 "unmap": true, 00:16:23.589 "flush": true, 00:16:23.589 "reset": true, 00:16:23.589 "nvme_admin": false, 00:16:23.589 "nvme_io": false, 00:16:23.589 "nvme_io_md": false, 00:16:23.589 "write_zeroes": true, 00:16:23.589 "zcopy": true, 00:16:23.589 "get_zone_info": false, 00:16:23.589 "zone_management": false, 00:16:23.589 "zone_append": false, 00:16:23.589 "compare": false, 00:16:23.589 "compare_and_write": false, 00:16:23.589 "abort": true, 00:16:23.589 "seek_hole": false, 00:16:23.589 "seek_data": false, 00:16:23.589 "copy": true, 00:16:23.589 "nvme_iov_md": false 00:16:23.589 }, 00:16:23.589 "memory_domains": [ 00:16:23.589 { 00:16:23.589 "dma_device_id": "system", 00:16:23.589 "dma_device_type": 1 00:16:23.589 }, 00:16:23.589 { 00:16:23.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.589 "dma_device_type": 2 00:16:23.589 } 00:16:23.589 ], 00:16:23.589 "driver_specific": {} 00:16:23.589 }' 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:23.589 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:23.846 "name": 
"BaseBdev4", 00:16:23.846 "aliases": [ 00:16:23.846 "94674fb2-42d0-11ef-96ac-773515fba644" 00:16:23.846 ], 00:16:23.846 "product_name": "Malloc disk", 00:16:23.846 "block_size": 512, 00:16:23.846 "num_blocks": 65536, 00:16:23.846 "uuid": "94674fb2-42d0-11ef-96ac-773515fba644", 00:16:23.846 "assigned_rate_limits": { 00:16:23.846 "rw_ios_per_sec": 0, 00:16:23.846 "rw_mbytes_per_sec": 0, 00:16:23.846 "r_mbytes_per_sec": 0, 00:16:23.846 "w_mbytes_per_sec": 0 00:16:23.846 }, 00:16:23.846 "claimed": true, 00:16:23.846 "claim_type": "exclusive_write", 00:16:23.846 "zoned": false, 00:16:23.846 "supported_io_types": { 00:16:23.846 "read": true, 00:16:23.846 "write": true, 00:16:23.846 "unmap": true, 00:16:23.846 "flush": true, 00:16:23.846 "reset": true, 00:16:23.846 "nvme_admin": false, 00:16:23.846 "nvme_io": false, 00:16:23.846 "nvme_io_md": false, 00:16:23.846 "write_zeroes": true, 00:16:23.846 "zcopy": true, 00:16:23.846 "get_zone_info": false, 00:16:23.846 "zone_management": false, 00:16:23.846 "zone_append": false, 00:16:23.846 "compare": false, 00:16:23.846 "compare_and_write": false, 00:16:23.846 "abort": true, 00:16:23.846 "seek_hole": false, 00:16:23.846 "seek_data": false, 00:16:23.846 "copy": true, 00:16:23.846 "nvme_iov_md": false 00:16:23.846 }, 00:16:23.846 "memory_domains": [ 00:16:23.846 { 00:16:23.846 "dma_device_id": "system", 00:16:23.846 "dma_device_type": 1 00:16:23.846 }, 00:16:23.846 { 00:16:23.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:23.846 "dma_device_type": 2 00:16:23.846 } 00:16:23.846 ], 00:16:23.846 "driver_specific": {} 00:16:23.846 }' 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:23.846 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:24.104 [2024-07-15 17:35:19.814465] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:24.104 [2024-07-15 17:35:19.814494] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:24.104 [2024-07-15 17:35:19.814518] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:24.104 [2024-07-15 17:35:19.814585] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:16:24.104 [2024-07-15 17:35:19.814590] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10e52f634f00 name Existed_Raid, state offline 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 63810 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 63810 ']' 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 63810 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 63810 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:24.104 killing process with pid 63810 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63810' 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 63810 00:16:24.104 [2024-07-15 17:35:19.840712] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:24.104 17:35:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 63810 00:16:24.104 [2024-07-15 17:35:19.863772] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.362 17:35:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:24.362 00:16:24.362 real 0m26.426s 00:16:24.362 user 0m48.304s 00:16:24.362 sys 0m3.663s 00:16:24.362 17:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:24.362 ************************************ 00:16:24.362 17:35:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.362 END TEST raid_state_function_test_sb 00:16:24.362 ************************************ 00:16:24.362 17:35:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:24.362 17:35:20 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:24.362 17:35:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:24.362 17:35:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:24.362 17:35:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:24.362 ************************************ 00:16:24.362 START TEST raid_superblock_test 00:16:24.362 ************************************ 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:24.362 17:35:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=64624 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 64624 /var/tmp/spdk-raid.sock 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 64624 ']' 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:24.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.362 17:35:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.362 [2024-07-15 17:35:20.097373] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:16:24.362 [2024-07-15 17:35:20.097509] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:24.927 EAL: TSC is not safe to use in SMP mode 00:16:24.927 EAL: TSC is not invariant 00:16:24.927 [2024-07-15 17:35:20.635773] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:24.927 [2024-07-15 17:35:20.733493] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:16:24.927 [2024-07-15 17:35:20.735944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:24.927 [2024-07-15 17:35:20.736932] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:24.927 [2024-07-15 17:35:20.736947] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:25.495 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:25.753 malloc1 00:16:25.753 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:26.012 [2024-07-15 17:35:21.646496] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:26.012 [2024-07-15 17:35:21.646601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.012 [2024-07-15 17:35:21.646614] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56434780 00:16:26.012 [2024-07-15 17:35:21.646623] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.012 [2024-07-15 17:35:21.647559] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.012 [2024-07-15 17:35:21.647586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:26.012 pt1 00:16:26.012 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:26.012 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:26.012 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:26.012 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:26.012 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:26.012 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.012 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.012 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.012 17:35:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:26.269 malloc2 00:16:26.270 17:35:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:26.527 [2024-07-15 17:35:22.206621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:26.527 [2024-07-15 17:35:22.206696] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:26.527 [2024-07-15 17:35:22.206724] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56434c80 00:16:26.527 [2024-07-15 17:35:22.206732] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:26.527 [2024-07-15 17:35:22.207443] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:26.527 [2024-07-15 17:35:22.207479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:26.527 pt2 00:16:26.527 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:26.527 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:26.527 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:26.527 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:26.527 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:26.528 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:26.528 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:26.528 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:26.528 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:26.787 malloc3 00:16:26.787 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:27.046 [2024-07-15 17:35:22.722668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:27.046 [2024-07-15 17:35:22.722725] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.046 [2024-07-15 17:35:22.722738] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56435180 00:16:27.046 [2024-07-15 17:35:22.722746] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.046 [2024-07-15 17:35:22.723431] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.046 [2024-07-15 17:35:22.723458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:27.046 pt3 00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 
00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:27.046 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:27.304 malloc4 00:16:27.304 17:35:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:27.569 [2024-07-15 17:35:23.186678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:27.569 [2024-07-15 17:35:23.186743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:27.569 [2024-07-15 17:35:23.186768] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56435680 00:16:27.569 [2024-07-15 17:35:23.186776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:27.569 [2024-07-15 17:35:23.187484] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:27.569 [2024-07-15 17:35:23.187512] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:27.569 pt4 00:16:27.569 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:27.569 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:27.569 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:27.826 [2024-07-15 17:35:23.430709] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:27.826 [2024-07-15 17:35:23.431314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:27.826 [2024-07-15 17:35:23.431339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:27.826 [2024-07-15 17:35:23.431351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:27.826 [2024-07-15 17:35:23.431406] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25d56435900 00:16:27.826 [2024-07-15 17:35:23.431413] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:27.826 [2024-07-15 17:35:23.431446] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d56497e20 00:16:27.826 [2024-07-15 17:35:23.431524] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25d56435900 00:16:27.826 [2024-07-15 17:35:23.431529] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25d56435900 00:16:27.826 [2024-07-15 17:35:23.431556] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.826 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.084 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.084 "name": "raid_bdev1", 00:16:28.085 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:28.085 "strip_size_kb": 0, 00:16:28.085 "state": "online", 00:16:28.085 "raid_level": "raid1", 00:16:28.085 "superblock": true, 00:16:28.085 "num_base_bdevs": 4, 00:16:28.085 "num_base_bdevs_discovered": 4, 00:16:28.085 "num_base_bdevs_operational": 4, 00:16:28.085 "base_bdevs_list": [ 00:16:28.085 { 00:16:28.085 "name": "pt1", 00:16:28.085 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.085 "is_configured": true, 00:16:28.085 "data_offset": 2048, 00:16:28.085 "data_size": 63488 00:16:28.085 }, 00:16:28.085 { 00:16:28.085 "name": "pt2", 00:16:28.085 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.085 "is_configured": true, 00:16:28.085 "data_offset": 2048, 00:16:28.085 "data_size": 63488 00:16:28.085 }, 00:16:28.085 { 00:16:28.085 "name": "pt3", 00:16:28.085 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.085 "is_configured": true, 00:16:28.085 "data_offset": 2048, 00:16:28.085 "data_size": 63488 00:16:28.085 }, 00:16:28.085 { 00:16:28.085 "name": "pt4", 00:16:28.085 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.085 "is_configured": true, 00:16:28.085 "data_offset": 2048, 00:16:28.085 "data_size": 63488 00:16:28.085 } 00:16:28.085 ] 00:16:28.085 }' 00:16:28.085 17:35:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.085 17:35:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.342 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:28.342 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:28.342 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:28.342 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:28.342 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:28.342 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:28.342 
17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:28.342 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:28.600 [2024-07-15 17:35:24.318761] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:28.600 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:28.600 "name": "raid_bdev1", 00:16:28.600 "aliases": [ 00:16:28.600 "9d7837b4-42d0-11ef-96ac-773515fba644" 00:16:28.600 ], 00:16:28.600 "product_name": "Raid Volume", 00:16:28.600 "block_size": 512, 00:16:28.600 "num_blocks": 63488, 00:16:28.600 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:28.600 "assigned_rate_limits": { 00:16:28.600 "rw_ios_per_sec": 0, 00:16:28.600 "rw_mbytes_per_sec": 0, 00:16:28.600 "r_mbytes_per_sec": 0, 00:16:28.600 "w_mbytes_per_sec": 0 00:16:28.600 }, 00:16:28.600 "claimed": false, 00:16:28.600 "zoned": false, 00:16:28.600 "supported_io_types": { 00:16:28.600 "read": true, 00:16:28.600 "write": true, 00:16:28.600 "unmap": false, 00:16:28.600 "flush": false, 00:16:28.600 "reset": true, 00:16:28.600 "nvme_admin": false, 00:16:28.600 "nvme_io": false, 00:16:28.600 "nvme_io_md": false, 00:16:28.600 "write_zeroes": true, 00:16:28.600 "zcopy": false, 00:16:28.600 "get_zone_info": false, 00:16:28.600 "zone_management": false, 00:16:28.600 "zone_append": false, 00:16:28.600 "compare": false, 00:16:28.600 "compare_and_write": false, 00:16:28.600 "abort": false, 00:16:28.600 "seek_hole": false, 00:16:28.600 "seek_data": false, 00:16:28.600 "copy": false, 00:16:28.600 "nvme_iov_md": false 00:16:28.600 }, 00:16:28.600 "memory_domains": [ 00:16:28.600 { 00:16:28.600 "dma_device_id": "system", 00:16:28.600 "dma_device_type": 1 00:16:28.600 }, 00:16:28.600 { 00:16:28.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.600 "dma_device_type": 2 00:16:28.601 }, 00:16:28.601 { 00:16:28.601 "dma_device_id": "system", 00:16:28.601 "dma_device_type": 1 00:16:28.601 }, 00:16:28.601 { 00:16:28.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.601 "dma_device_type": 2 00:16:28.601 }, 00:16:28.601 { 00:16:28.601 "dma_device_id": "system", 00:16:28.601 "dma_device_type": 1 00:16:28.601 }, 00:16:28.601 { 00:16:28.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.601 "dma_device_type": 2 00:16:28.601 }, 00:16:28.601 { 00:16:28.601 "dma_device_id": "system", 00:16:28.601 "dma_device_type": 1 00:16:28.601 }, 00:16:28.601 { 00:16:28.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.601 "dma_device_type": 2 00:16:28.601 } 00:16:28.601 ], 00:16:28.601 "driver_specific": { 00:16:28.601 "raid": { 00:16:28.601 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:28.601 "strip_size_kb": 0, 00:16:28.601 "state": "online", 00:16:28.601 "raid_level": "raid1", 00:16:28.601 "superblock": true, 00:16:28.601 "num_base_bdevs": 4, 00:16:28.601 "num_base_bdevs_discovered": 4, 00:16:28.601 "num_base_bdevs_operational": 4, 00:16:28.601 "base_bdevs_list": [ 00:16:28.601 { 00:16:28.601 "name": "pt1", 00:16:28.601 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.601 "is_configured": true, 00:16:28.601 "data_offset": 2048, 00:16:28.601 "data_size": 63488 00:16:28.601 }, 00:16:28.601 { 00:16:28.601 "name": "pt2", 00:16:28.601 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.601 "is_configured": true, 00:16:28.601 "data_offset": 2048, 00:16:28.601 "data_size": 63488 00:16:28.601 }, 00:16:28.601 
{ 00:16:28.601 "name": "pt3", 00:16:28.601 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.601 "is_configured": true, 00:16:28.601 "data_offset": 2048, 00:16:28.601 "data_size": 63488 00:16:28.601 }, 00:16:28.601 { 00:16:28.601 "name": "pt4", 00:16:28.601 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.601 "is_configured": true, 00:16:28.601 "data_offset": 2048, 00:16:28.601 "data_size": 63488 00:16:28.601 } 00:16:28.601 ] 00:16:28.601 } 00:16:28.601 } 00:16:28.601 }' 00:16:28.601 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:28.601 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:28.601 pt2 00:16:28.601 pt3 00:16:28.601 pt4' 00:16:28.601 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.601 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:28.601 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:28.859 "name": "pt1", 00:16:28.859 "aliases": [ 00:16:28.859 "00000000-0000-0000-0000-000000000001" 00:16:28.859 ], 00:16:28.859 "product_name": "passthru", 00:16:28.859 "block_size": 512, 00:16:28.859 "num_blocks": 65536, 00:16:28.859 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:28.859 "assigned_rate_limits": { 00:16:28.859 "rw_ios_per_sec": 0, 00:16:28.859 "rw_mbytes_per_sec": 0, 00:16:28.859 "r_mbytes_per_sec": 0, 00:16:28.859 "w_mbytes_per_sec": 0 00:16:28.859 }, 00:16:28.859 "claimed": true, 00:16:28.859 "claim_type": "exclusive_write", 00:16:28.859 "zoned": false, 00:16:28.859 "supported_io_types": { 00:16:28.859 "read": true, 00:16:28.859 "write": true, 00:16:28.859 "unmap": true, 00:16:28.859 "flush": true, 00:16:28.859 "reset": true, 00:16:28.859 "nvme_admin": false, 00:16:28.859 "nvme_io": false, 00:16:28.859 "nvme_io_md": false, 00:16:28.859 "write_zeroes": true, 00:16:28.859 "zcopy": true, 00:16:28.859 "get_zone_info": false, 00:16:28.859 "zone_management": false, 00:16:28.859 "zone_append": false, 00:16:28.859 "compare": false, 00:16:28.859 "compare_and_write": false, 00:16:28.859 "abort": true, 00:16:28.859 "seek_hole": false, 00:16:28.859 "seek_data": false, 00:16:28.859 "copy": true, 00:16:28.859 "nvme_iov_md": false 00:16:28.859 }, 00:16:28.859 "memory_domains": [ 00:16:28.859 { 00:16:28.859 "dma_device_id": "system", 00:16:28.859 "dma_device_type": 1 00:16:28.859 }, 00:16:28.859 { 00:16:28.859 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:28.859 "dma_device_type": 2 00:16:28.859 } 00:16:28.859 ], 00:16:28.859 "driver_specific": { 00:16:28.859 "passthru": { 00:16:28.859 "name": "pt1", 00:16:28.859 "base_bdev_name": "malloc1" 00:16:28.859 } 00:16:28.859 } 00:16:28.859 }' 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:28.859 17:35:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:28.859 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.429 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.429 "name": "pt2", 00:16:29.429 "aliases": [ 00:16:29.429 "00000000-0000-0000-0000-000000000002" 00:16:29.429 ], 00:16:29.429 "product_name": "passthru", 00:16:29.429 "block_size": 512, 00:16:29.429 "num_blocks": 65536, 00:16:29.429 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.429 "assigned_rate_limits": { 00:16:29.429 "rw_ios_per_sec": 0, 00:16:29.429 "rw_mbytes_per_sec": 0, 00:16:29.429 "r_mbytes_per_sec": 0, 00:16:29.429 "w_mbytes_per_sec": 0 00:16:29.429 }, 00:16:29.429 "claimed": true, 00:16:29.429 "claim_type": "exclusive_write", 00:16:29.429 "zoned": false, 00:16:29.429 "supported_io_types": { 00:16:29.429 "read": true, 00:16:29.429 "write": true, 00:16:29.429 "unmap": true, 00:16:29.429 "flush": true, 00:16:29.429 "reset": true, 00:16:29.429 "nvme_admin": false, 00:16:29.429 "nvme_io": false, 00:16:29.429 "nvme_io_md": false, 00:16:29.429 "write_zeroes": true, 00:16:29.429 "zcopy": true, 00:16:29.429 "get_zone_info": false, 00:16:29.429 "zone_management": false, 00:16:29.429 "zone_append": false, 00:16:29.429 "compare": false, 00:16:29.429 "compare_and_write": false, 00:16:29.429 "abort": true, 00:16:29.429 "seek_hole": false, 00:16:29.429 "seek_data": false, 00:16:29.429 "copy": true, 00:16:29.429 "nvme_iov_md": false 00:16:29.429 }, 00:16:29.429 "memory_domains": [ 00:16:29.429 { 00:16:29.429 "dma_device_id": "system", 00:16:29.429 "dma_device_type": 1 00:16:29.429 }, 00:16:29.429 { 00:16:29.429 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.429 "dma_device_type": 2 00:16:29.429 } 00:16:29.429 ], 00:16:29.429 "driver_specific": { 00:16:29.429 "passthru": { 00:16:29.429 "name": "pt2", 00:16:29.429 "base_bdev_name": "malloc2" 00:16:29.429 } 00:16:29.429 } 00:16:29.429 }' 00:16:29.429 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.429 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.429 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.429 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.429 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.429 17:35:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.429 17:35:24 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.429 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.429 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.429 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.429 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.429 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.429 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.429 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:29.429 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.686 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.686 "name": "pt3", 00:16:29.686 "aliases": [ 00:16:29.686 "00000000-0000-0000-0000-000000000003" 00:16:29.686 ], 00:16:29.686 "product_name": "passthru", 00:16:29.686 "block_size": 512, 00:16:29.686 "num_blocks": 65536, 00:16:29.686 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.686 "assigned_rate_limits": { 00:16:29.686 "rw_ios_per_sec": 0, 00:16:29.686 "rw_mbytes_per_sec": 0, 00:16:29.686 "r_mbytes_per_sec": 0, 00:16:29.686 "w_mbytes_per_sec": 0 00:16:29.686 }, 00:16:29.686 "claimed": true, 00:16:29.686 "claim_type": "exclusive_write", 00:16:29.686 "zoned": false, 00:16:29.686 "supported_io_types": { 00:16:29.686 "read": true, 00:16:29.686 "write": true, 00:16:29.686 "unmap": true, 00:16:29.686 "flush": true, 00:16:29.686 "reset": true, 00:16:29.686 "nvme_admin": false, 00:16:29.686 "nvme_io": false, 00:16:29.686 "nvme_io_md": false, 00:16:29.686 "write_zeroes": true, 00:16:29.686 "zcopy": true, 00:16:29.686 "get_zone_info": false, 00:16:29.686 "zone_management": false, 00:16:29.686 "zone_append": false, 00:16:29.686 "compare": false, 00:16:29.686 "compare_and_write": false, 00:16:29.686 "abort": true, 00:16:29.686 "seek_hole": false, 00:16:29.686 "seek_data": false, 00:16:29.686 "copy": true, 00:16:29.686 "nvme_iov_md": false 00:16:29.686 }, 00:16:29.686 "memory_domains": [ 00:16:29.686 { 00:16:29.686 "dma_device_id": "system", 00:16:29.686 "dma_device_type": 1 00:16:29.686 }, 00:16:29.687 { 00:16:29.687 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.687 "dma_device_type": 2 00:16:29.687 } 00:16:29.687 ], 00:16:29.687 "driver_specific": { 00:16:29.687 "passthru": { 00:16:29.687 "name": "pt3", 00:16:29.687 "base_bdev_name": "malloc3" 00:16:29.687 } 00:16:29.687 } 00:16:29.687 }' 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:29.687 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:29.945 "name": "pt4", 00:16:29.945 "aliases": [ 00:16:29.945 "00000000-0000-0000-0000-000000000004" 00:16:29.945 ], 00:16:29.945 "product_name": "passthru", 00:16:29.945 "block_size": 512, 00:16:29.945 "num_blocks": 65536, 00:16:29.945 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.945 "assigned_rate_limits": { 00:16:29.945 "rw_ios_per_sec": 0, 00:16:29.945 "rw_mbytes_per_sec": 0, 00:16:29.945 "r_mbytes_per_sec": 0, 00:16:29.945 "w_mbytes_per_sec": 0 00:16:29.945 }, 00:16:29.945 "claimed": true, 00:16:29.945 "claim_type": "exclusive_write", 00:16:29.945 "zoned": false, 00:16:29.945 "supported_io_types": { 00:16:29.945 "read": true, 00:16:29.945 "write": true, 00:16:29.945 "unmap": true, 00:16:29.945 "flush": true, 00:16:29.945 "reset": true, 00:16:29.945 "nvme_admin": false, 00:16:29.945 "nvme_io": false, 00:16:29.945 "nvme_io_md": false, 00:16:29.945 "write_zeroes": true, 00:16:29.945 "zcopy": true, 00:16:29.945 "get_zone_info": false, 00:16:29.945 "zone_management": false, 00:16:29.945 "zone_append": false, 00:16:29.945 "compare": false, 00:16:29.945 "compare_and_write": false, 00:16:29.945 "abort": true, 00:16:29.945 "seek_hole": false, 00:16:29.945 "seek_data": false, 00:16:29.945 "copy": true, 00:16:29.945 "nvme_iov_md": false 00:16:29.945 }, 00:16:29.945 "memory_domains": [ 00:16:29.945 { 00:16:29.945 "dma_device_id": "system", 00:16:29.945 "dma_device_type": 1 00:16:29.945 }, 00:16:29.945 { 00:16:29.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:29.945 "dma_device_type": 2 00:16:29.945 } 00:16:29.945 ], 00:16:29.945 "driver_specific": { 00:16:29.945 "passthru": { 00:16:29.945 "name": "pt4", 00:16:29.945 "base_bdev_name": "malloc4" 00:16:29.945 } 00:16:29.945 } 00:16:29.945 }' 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:29.945 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:30.204 [2024-07-15 17:35:25.946795] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:30.204 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=9d7837b4-42d0-11ef-96ac-773515fba644 00:16:30.204 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 9d7837b4-42d0-11ef-96ac-773515fba644 ']' 00:16:30.204 17:35:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:30.463 [2024-07-15 17:35:26.238747] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.463 [2024-07-15 17:35:26.238779] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.463 [2024-07-15 17:35:26.238802] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.463 [2024-07-15 17:35:26.238821] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.463 [2024-07-15 17:35:26.238825] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d56435900 name raid_bdev1, state offline 00:16:30.463 17:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.463 17:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:30.721 17:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:30.721 17:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:30.721 17:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:30.721 17:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:31.287 17:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.287 17:35:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:31.287 17:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.287 17:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:31.545 17:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:31.545 17:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:31.804 17:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 
00:16:31.804 17:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:32.063 17:35:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:32.322 [2024-07-15 17:35:28.034796] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:32.322 [2024-07-15 17:35:28.035394] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:32.322 [2024-07-15 17:35:28.035415] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:32.322 [2024-07-15 17:35:28.035429] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:32.322 [2024-07-15 17:35:28.035452] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:32.322 [2024-07-15 17:35:28.035496] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:32.322 [2024-07-15 17:35:28.035508] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:32.322 [2024-07-15 17:35:28.035517] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:32.322 [2024-07-15 17:35:28.035526] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:32.322 [2024-07-15 17:35:28.035530] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x25d56435680 name raid_bdev1, state configuring 00:16:32.322 request: 00:16:32.322 { 00:16:32.322 "name": "raid_bdev1", 00:16:32.322 "raid_level": "raid1", 00:16:32.322 "base_bdevs": [ 00:16:32.322 "malloc1", 00:16:32.322 "malloc2", 00:16:32.322 "malloc3", 00:16:32.322 "malloc4" 00:16:32.322 ], 00:16:32.322 "superblock": false, 00:16:32.322 "method": "bdev_raid_create", 00:16:32.322 "req_id": 1 00:16:32.322 } 00:16:32.322 Got JSON-RPC error response 00:16:32.322 response: 00:16:32.322 { 00:16:32.322 "code": -17, 00:16:32.322 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:32.322 } 00:16:32.322 17:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:32.322 17:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:32.322 17:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:32.322 17:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:32.322 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.322 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:32.580 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:32.580 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:32.580 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:32.839 [2024-07-15 17:35:28.598805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:32.839 [2024-07-15 17:35:28.598864] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.839 [2024-07-15 17:35:28.598877] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56435180 00:16:32.839 [2024-07-15 17:35:28.598885] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.839 [2024-07-15 17:35:28.599586] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.839 [2024-07-15 17:35:28.599610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:32.839 [2024-07-15 17:35:28.599637] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:32.839 [2024-07-15 17:35:28.599649] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:32.839 pt1 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.839 17:35:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.839 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.406 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:33.406 "name": "raid_bdev1", 00:16:33.406 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:33.406 "strip_size_kb": 0, 00:16:33.406 "state": "configuring", 00:16:33.406 "raid_level": "raid1", 00:16:33.406 "superblock": true, 00:16:33.406 "num_base_bdevs": 4, 00:16:33.406 "num_base_bdevs_discovered": 1, 00:16:33.406 "num_base_bdevs_operational": 4, 00:16:33.406 "base_bdevs_list": [ 00:16:33.406 { 00:16:33.406 "name": "pt1", 00:16:33.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:33.406 "is_configured": true, 00:16:33.406 "data_offset": 2048, 00:16:33.406 "data_size": 63488 00:16:33.406 }, 00:16:33.406 { 00:16:33.406 "name": null, 00:16:33.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:33.406 "is_configured": false, 00:16:33.406 "data_offset": 2048, 00:16:33.406 "data_size": 63488 00:16:33.406 }, 00:16:33.406 { 00:16:33.406 "name": null, 00:16:33.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:33.406 "is_configured": false, 00:16:33.406 "data_offset": 2048, 00:16:33.406 "data_size": 63488 00:16:33.406 }, 00:16:33.406 { 00:16:33.406 "name": null, 00:16:33.406 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:33.406 "is_configured": false, 00:16:33.406 "data_offset": 2048, 00:16:33.406 "data_size": 63488 00:16:33.406 } 00:16:33.406 ] 00:16:33.406 }' 00:16:33.406 17:35:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:33.406 17:35:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.663 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:16:33.664 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:33.664 [2024-07-15 17:35:29.482824] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:33.664 [2024-07-15 17:35:29.482886] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:33.664 [2024-07-15 17:35:29.482899] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56434780 00:16:33.664 [2024-07-15 17:35:29.482907] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:33.664 [2024-07-15 17:35:29.483026] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:33.664 [2024-07-15 17:35:29.483038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:33.664 [2024-07-15 17:35:29.483061] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:33.664 [2024-07-15 17:35:29.483070] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:33.664 pt2 00:16:33.922 17:35:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:33.922 [2024-07-15 17:35:29.722835] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:33.922 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.180 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:34.180 "name": "raid_bdev1", 00:16:34.180 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:34.180 "strip_size_kb": 0, 00:16:34.180 "state": "configuring", 00:16:34.180 "raid_level": "raid1", 00:16:34.180 "superblock": true, 00:16:34.180 "num_base_bdevs": 4, 00:16:34.180 "num_base_bdevs_discovered": 1, 00:16:34.180 "num_base_bdevs_operational": 4, 00:16:34.180 "base_bdevs_list": [ 00:16:34.180 { 00:16:34.180 "name": "pt1", 00:16:34.180 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:34.180 "is_configured": true, 00:16:34.180 "data_offset": 2048, 00:16:34.180 "data_size": 63488 00:16:34.180 }, 00:16:34.180 { 00:16:34.180 "name": null, 00:16:34.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:34.180 "is_configured": false, 00:16:34.180 "data_offset": 2048, 00:16:34.180 "data_size": 63488 00:16:34.180 }, 00:16:34.180 { 00:16:34.180 "name": null, 00:16:34.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:34.180 "is_configured": false, 00:16:34.180 "data_offset": 2048, 00:16:34.180 "data_size": 63488 00:16:34.180 }, 00:16:34.180 { 00:16:34.180 "name": null, 00:16:34.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:34.180 "is_configured": false, 00:16:34.180 "data_offset": 2048, 00:16:34.180 "data_size": 63488 00:16:34.180 } 00:16:34.180 ] 00:16:34.180 }' 00:16:34.180 17:35:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:34.180 17:35:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:34.746 17:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:34.746 17:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:34.746 17:35:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:35.004 [2024-07-15 17:35:30.610846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:35.004 [2024-07-15 17:35:30.610908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.004 [2024-07-15 17:35:30.610921] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56434780 00:16:35.004 [2024-07-15 17:35:30.610930] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.004 [2024-07-15 17:35:30.611046] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.004 [2024-07-15 17:35:30.611057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:35.004 [2024-07-15 17:35:30.611081] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:35.004 [2024-07-15 17:35:30.611090] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:35.004 pt2 00:16:35.004 17:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:35.004 17:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:35.004 17:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:35.262 [2024-07-15 17:35:30.942865] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:35.262 [2024-07-15 17:35:30.942930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.262 [2024-07-15 17:35:30.942957] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56435b80 00:16:35.262 [2024-07-15 17:35:30.942965] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.262 [2024-07-15 17:35:30.943072] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.262 [2024-07-15 17:35:30.943083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:35.262 [2024-07-15 17:35:30.943121] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:35.262 [2024-07-15 17:35:30.943130] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:35.262 pt3 00:16:35.262 17:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:35.262 17:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:35.262 17:35:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:35.520 [2024-07-15 17:35:31.226850] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:35.520 [2024-07-15 17:35:31.226888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.520 [2024-07-15 17:35:31.226915] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56435900 00:16:35.520 [2024-07-15 17:35:31.226938] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.520 [2024-07-15 17:35:31.227058] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.520 [2024-07-15 17:35:31.227069] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:35.520 [2024-07-15 17:35:31.227090] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:35.520 [2024-07-15 17:35:31.227098] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:35.520 [2024-07-15 17:35:31.227137] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25d56434c80 00:16:35.520 [2024-07-15 17:35:31.227142] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:35.520 [2024-07-15 17:35:31.227164] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d56497e20 00:16:35.520 [2024-07-15 17:35:31.227217] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25d56434c80 00:16:35.520 [2024-07-15 17:35:31.227222] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25d56434c80 00:16:35.520 [2024-07-15 17:35:31.227243] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:35.520 pt4 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:35.520 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.779 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:35.779 "name": "raid_bdev1", 00:16:35.779 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:35.779 "strip_size_kb": 0, 00:16:35.779 "state": "online", 00:16:35.779 "raid_level": "raid1", 00:16:35.779 "superblock": true, 00:16:35.779 "num_base_bdevs": 4, 00:16:35.779 "num_base_bdevs_discovered": 4, 00:16:35.779 "num_base_bdevs_operational": 4, 00:16:35.779 "base_bdevs_list": [ 00:16:35.779 { 00:16:35.779 "name": "pt1", 00:16:35.779 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:35.779 "is_configured": true, 00:16:35.779 "data_offset": 2048, 00:16:35.779 "data_size": 63488 00:16:35.779 }, 
00:16:35.779 { 00:16:35.779 "name": "pt2", 00:16:35.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:35.779 "is_configured": true, 00:16:35.779 "data_offset": 2048, 00:16:35.779 "data_size": 63488 00:16:35.779 }, 00:16:35.779 { 00:16:35.779 "name": "pt3", 00:16:35.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:35.779 "is_configured": true, 00:16:35.779 "data_offset": 2048, 00:16:35.779 "data_size": 63488 00:16:35.779 }, 00:16:35.779 { 00:16:35.779 "name": "pt4", 00:16:35.779 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:35.779 "is_configured": true, 00:16:35.779 "data_offset": 2048, 00:16:35.779 "data_size": 63488 00:16:35.779 } 00:16:35.779 ] 00:16:35.779 }' 00:16:35.779 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:35.779 17:35:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:36.036 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:36.036 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:36.036 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:36.036 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:36.036 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:36.036 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:36.036 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:36.036 17:35:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:36.333 [2024-07-15 17:35:32.054909] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:36.333 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:36.333 "name": "raid_bdev1", 00:16:36.333 "aliases": [ 00:16:36.333 "9d7837b4-42d0-11ef-96ac-773515fba644" 00:16:36.333 ], 00:16:36.333 "product_name": "Raid Volume", 00:16:36.333 "block_size": 512, 00:16:36.333 "num_blocks": 63488, 00:16:36.333 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:36.333 "assigned_rate_limits": { 00:16:36.333 "rw_ios_per_sec": 0, 00:16:36.333 "rw_mbytes_per_sec": 0, 00:16:36.333 "r_mbytes_per_sec": 0, 00:16:36.333 "w_mbytes_per_sec": 0 00:16:36.333 }, 00:16:36.333 "claimed": false, 00:16:36.333 "zoned": false, 00:16:36.334 "supported_io_types": { 00:16:36.334 "read": true, 00:16:36.334 "write": true, 00:16:36.334 "unmap": false, 00:16:36.334 "flush": false, 00:16:36.334 "reset": true, 00:16:36.334 "nvme_admin": false, 00:16:36.334 "nvme_io": false, 00:16:36.334 "nvme_io_md": false, 00:16:36.334 "write_zeroes": true, 00:16:36.334 "zcopy": false, 00:16:36.334 "get_zone_info": false, 00:16:36.334 "zone_management": false, 00:16:36.334 "zone_append": false, 00:16:36.334 "compare": false, 00:16:36.334 "compare_and_write": false, 00:16:36.334 "abort": false, 00:16:36.334 "seek_hole": false, 00:16:36.334 "seek_data": false, 00:16:36.334 "copy": false, 00:16:36.334 "nvme_iov_md": false 00:16:36.334 }, 00:16:36.334 "memory_domains": [ 00:16:36.334 { 00:16:36.334 "dma_device_id": "system", 00:16:36.334 "dma_device_type": 1 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.334 "dma_device_type": 2 00:16:36.334 }, 
00:16:36.334 { 00:16:36.334 "dma_device_id": "system", 00:16:36.334 "dma_device_type": 1 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.334 "dma_device_type": 2 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "dma_device_id": "system", 00:16:36.334 "dma_device_type": 1 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.334 "dma_device_type": 2 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "dma_device_id": "system", 00:16:36.334 "dma_device_type": 1 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.334 "dma_device_type": 2 00:16:36.334 } 00:16:36.334 ], 00:16:36.334 "driver_specific": { 00:16:36.334 "raid": { 00:16:36.334 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:36.334 "strip_size_kb": 0, 00:16:36.334 "state": "online", 00:16:36.334 "raid_level": "raid1", 00:16:36.334 "superblock": true, 00:16:36.334 "num_base_bdevs": 4, 00:16:36.334 "num_base_bdevs_discovered": 4, 00:16:36.334 "num_base_bdevs_operational": 4, 00:16:36.334 "base_bdevs_list": [ 00:16:36.334 { 00:16:36.334 "name": "pt1", 00:16:36.334 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.334 "is_configured": true, 00:16:36.334 "data_offset": 2048, 00:16:36.334 "data_size": 63488 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "name": "pt2", 00:16:36.334 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.334 "is_configured": true, 00:16:36.334 "data_offset": 2048, 00:16:36.334 "data_size": 63488 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "name": "pt3", 00:16:36.334 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:36.334 "is_configured": true, 00:16:36.334 "data_offset": 2048, 00:16:36.334 "data_size": 63488 00:16:36.334 }, 00:16:36.334 { 00:16:36.334 "name": "pt4", 00:16:36.334 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:36.334 "is_configured": true, 00:16:36.334 "data_offset": 2048, 00:16:36.334 "data_size": 63488 00:16:36.334 } 00:16:36.334 ] 00:16:36.334 } 00:16:36.334 } 00:16:36.334 }' 00:16:36.334 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:36.334 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:36.334 pt2 00:16:36.334 pt3 00:16:36.334 pt4' 00:16:36.334 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.334 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:36.334 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.592 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.592 "name": "pt1", 00:16:36.592 "aliases": [ 00:16:36.592 "00000000-0000-0000-0000-000000000001" 00:16:36.592 ], 00:16:36.592 "product_name": "passthru", 00:16:36.592 "block_size": 512, 00:16:36.592 "num_blocks": 65536, 00:16:36.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:36.593 "assigned_rate_limits": { 00:16:36.593 "rw_ios_per_sec": 0, 00:16:36.593 "rw_mbytes_per_sec": 0, 00:16:36.593 "r_mbytes_per_sec": 0, 00:16:36.593 "w_mbytes_per_sec": 0 00:16:36.593 }, 00:16:36.593 "claimed": true, 00:16:36.593 "claim_type": "exclusive_write", 00:16:36.593 "zoned": false, 00:16:36.593 "supported_io_types": { 00:16:36.593 "read": true, 00:16:36.593 "write": true, 00:16:36.593 
"unmap": true, 00:16:36.593 "flush": true, 00:16:36.593 "reset": true, 00:16:36.593 "nvme_admin": false, 00:16:36.593 "nvme_io": false, 00:16:36.593 "nvme_io_md": false, 00:16:36.593 "write_zeroes": true, 00:16:36.593 "zcopy": true, 00:16:36.593 "get_zone_info": false, 00:16:36.593 "zone_management": false, 00:16:36.593 "zone_append": false, 00:16:36.593 "compare": false, 00:16:36.593 "compare_and_write": false, 00:16:36.593 "abort": true, 00:16:36.593 "seek_hole": false, 00:16:36.593 "seek_data": false, 00:16:36.593 "copy": true, 00:16:36.593 "nvme_iov_md": false 00:16:36.593 }, 00:16:36.593 "memory_domains": [ 00:16:36.593 { 00:16:36.593 "dma_device_id": "system", 00:16:36.593 "dma_device_type": 1 00:16:36.593 }, 00:16:36.593 { 00:16:36.593 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.593 "dma_device_type": 2 00:16:36.593 } 00:16:36.593 ], 00:16:36.593 "driver_specific": { 00:16:36.593 "passthru": { 00:16:36.593 "name": "pt1", 00:16:36.593 "base_bdev_name": "malloc1" 00:16:36.593 } 00:16:36.593 } 00:16:36.593 }' 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.593 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:36.851 "name": "pt2", 00:16:36.851 "aliases": [ 00:16:36.851 "00000000-0000-0000-0000-000000000002" 00:16:36.851 ], 00:16:36.851 "product_name": "passthru", 00:16:36.851 "block_size": 512, 00:16:36.851 "num_blocks": 65536, 00:16:36.851 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:36.851 "assigned_rate_limits": { 00:16:36.851 "rw_ios_per_sec": 0, 00:16:36.851 "rw_mbytes_per_sec": 0, 00:16:36.851 "r_mbytes_per_sec": 0, 00:16:36.851 "w_mbytes_per_sec": 0 00:16:36.851 }, 00:16:36.851 "claimed": true, 00:16:36.851 "claim_type": "exclusive_write", 00:16:36.851 "zoned": false, 00:16:36.851 "supported_io_types": { 00:16:36.851 "read": true, 00:16:36.851 "write": true, 00:16:36.851 "unmap": true, 00:16:36.851 "flush": true, 00:16:36.851 "reset": true, 00:16:36.851 "nvme_admin": false, 00:16:36.851 "nvme_io": false, 00:16:36.851 
"nvme_io_md": false, 00:16:36.851 "write_zeroes": true, 00:16:36.851 "zcopy": true, 00:16:36.851 "get_zone_info": false, 00:16:36.851 "zone_management": false, 00:16:36.851 "zone_append": false, 00:16:36.851 "compare": false, 00:16:36.851 "compare_and_write": false, 00:16:36.851 "abort": true, 00:16:36.851 "seek_hole": false, 00:16:36.851 "seek_data": false, 00:16:36.851 "copy": true, 00:16:36.851 "nvme_iov_md": false 00:16:36.851 }, 00:16:36.851 "memory_domains": [ 00:16:36.851 { 00:16:36.851 "dma_device_id": "system", 00:16:36.851 "dma_device_type": 1 00:16:36.851 }, 00:16:36.851 { 00:16:36.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:36.851 "dma_device_type": 2 00:16:36.851 } 00:16:36.851 ], 00:16:36.851 "driver_specific": { 00:16:36.851 "passthru": { 00:16:36.851 "name": "pt2", 00:16:36.851 "base_bdev_name": "malloc2" 00:16:36.851 } 00:16:36.851 } 00:16:36.851 }' 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:36.851 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.109 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.109 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.109 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.109 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.109 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.109 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.109 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.109 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.110 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:37.110 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:37.110 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:37.368 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:37.368 "name": "pt3", 00:16:37.368 "aliases": [ 00:16:37.368 "00000000-0000-0000-0000-000000000003" 00:16:37.368 ], 00:16:37.368 "product_name": "passthru", 00:16:37.368 "block_size": 512, 00:16:37.368 "num_blocks": 65536, 00:16:37.368 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:37.368 "assigned_rate_limits": { 00:16:37.368 "rw_ios_per_sec": 0, 00:16:37.368 "rw_mbytes_per_sec": 0, 00:16:37.368 "r_mbytes_per_sec": 0, 00:16:37.368 "w_mbytes_per_sec": 0 00:16:37.368 }, 00:16:37.368 "claimed": true, 00:16:37.368 "claim_type": "exclusive_write", 00:16:37.368 "zoned": false, 00:16:37.368 "supported_io_types": { 00:16:37.368 "read": true, 00:16:37.368 "write": true, 00:16:37.368 "unmap": true, 00:16:37.368 "flush": true, 00:16:37.368 "reset": true, 00:16:37.368 "nvme_admin": false, 00:16:37.368 "nvme_io": false, 00:16:37.368 "nvme_io_md": false, 00:16:37.368 "write_zeroes": true, 00:16:37.368 "zcopy": true, 00:16:37.368 "get_zone_info": false, 00:16:37.368 "zone_management": 
false, 00:16:37.368 "zone_append": false, 00:16:37.368 "compare": false, 00:16:37.368 "compare_and_write": false, 00:16:37.368 "abort": true, 00:16:37.368 "seek_hole": false, 00:16:37.368 "seek_data": false, 00:16:37.368 "copy": true, 00:16:37.368 "nvme_iov_md": false 00:16:37.368 }, 00:16:37.368 "memory_domains": [ 00:16:37.368 { 00:16:37.368 "dma_device_id": "system", 00:16:37.368 "dma_device_type": 1 00:16:37.368 }, 00:16:37.368 { 00:16:37.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.368 "dma_device_type": 2 00:16:37.368 } 00:16:37.368 ], 00:16:37.368 "driver_specific": { 00:16:37.368 "passthru": { 00:16:37.368 "name": "pt3", 00:16:37.368 "base_bdev_name": "malloc3" 00:16:37.368 } 00:16:37.368 } 00:16:37.368 }' 00:16:37.368 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.368 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.368 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.368 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.368 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.368 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.368 17:35:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.368 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.368 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.368 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.368 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.368 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.368 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:37.368 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:37.368 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:37.626 "name": "pt4", 00:16:37.626 "aliases": [ 00:16:37.626 "00000000-0000-0000-0000-000000000004" 00:16:37.626 ], 00:16:37.626 "product_name": "passthru", 00:16:37.626 "block_size": 512, 00:16:37.626 "num_blocks": 65536, 00:16:37.626 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:37.626 "assigned_rate_limits": { 00:16:37.626 "rw_ios_per_sec": 0, 00:16:37.626 "rw_mbytes_per_sec": 0, 00:16:37.626 "r_mbytes_per_sec": 0, 00:16:37.626 "w_mbytes_per_sec": 0 00:16:37.626 }, 00:16:37.626 "claimed": true, 00:16:37.626 "claim_type": "exclusive_write", 00:16:37.626 "zoned": false, 00:16:37.626 "supported_io_types": { 00:16:37.626 "read": true, 00:16:37.626 "write": true, 00:16:37.626 "unmap": true, 00:16:37.626 "flush": true, 00:16:37.626 "reset": true, 00:16:37.626 "nvme_admin": false, 00:16:37.626 "nvme_io": false, 00:16:37.626 "nvme_io_md": false, 00:16:37.626 "write_zeroes": true, 00:16:37.626 "zcopy": true, 00:16:37.626 "get_zone_info": false, 00:16:37.626 "zone_management": false, 00:16:37.626 "zone_append": false, 00:16:37.626 "compare": false, 00:16:37.626 "compare_and_write": false, 00:16:37.626 "abort": true, 00:16:37.626 
"seek_hole": false, 00:16:37.626 "seek_data": false, 00:16:37.626 "copy": true, 00:16:37.626 "nvme_iov_md": false 00:16:37.626 }, 00:16:37.626 "memory_domains": [ 00:16:37.626 { 00:16:37.626 "dma_device_id": "system", 00:16:37.626 "dma_device_type": 1 00:16:37.626 }, 00:16:37.626 { 00:16:37.626 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:37.626 "dma_device_type": 2 00:16:37.626 } 00:16:37.626 ], 00:16:37.626 "driver_specific": { 00:16:37.626 "passthru": { 00:16:37.626 "name": "pt4", 00:16:37.626 "base_bdev_name": "malloc4" 00:16:37.626 } 00:16:37.626 } 00:16:37.626 }' 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:37.626 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:37.884 [2024-07-15 17:35:33.610964] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:37.884 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 9d7837b4-42d0-11ef-96ac-773515fba644 '!=' 9d7837b4-42d0-11ef-96ac-773515fba644 ']' 00:16:37.884 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:37.884 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:37.884 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:37.884 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:38.142 [2024-07-15 17:35:33.878929] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:38.142 17:35:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.400 17:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:38.400 "name": "raid_bdev1", 00:16:38.400 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:38.400 "strip_size_kb": 0, 00:16:38.400 "state": "online", 00:16:38.400 "raid_level": "raid1", 00:16:38.400 "superblock": true, 00:16:38.400 "num_base_bdevs": 4, 00:16:38.400 "num_base_bdevs_discovered": 3, 00:16:38.400 "num_base_bdevs_operational": 3, 00:16:38.400 "base_bdevs_list": [ 00:16:38.400 { 00:16:38.400 "name": null, 00:16:38.400 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:38.400 "is_configured": false, 00:16:38.400 "data_offset": 2048, 00:16:38.400 "data_size": 63488 00:16:38.400 }, 00:16:38.400 { 00:16:38.400 "name": "pt2", 00:16:38.400 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:38.400 "is_configured": true, 00:16:38.400 "data_offset": 2048, 00:16:38.400 "data_size": 63488 00:16:38.400 }, 00:16:38.400 { 00:16:38.400 "name": "pt3", 00:16:38.400 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:38.400 "is_configured": true, 00:16:38.400 "data_offset": 2048, 00:16:38.400 "data_size": 63488 00:16:38.400 }, 00:16:38.400 { 00:16:38.400 "name": "pt4", 00:16:38.400 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:38.400 "is_configured": true, 00:16:38.400 "data_offset": 2048, 00:16:38.400 "data_size": 63488 00:16:38.400 } 00:16:38.400 ] 00:16:38.400 }' 00:16:38.400 17:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:38.400 17:35:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:38.964 17:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:38.964 [2024-07-15 17:35:34.782932] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:38.964 [2024-07-15 17:35:34.782958] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:38.964 [2024-07-15 17:35:34.782989] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:38.964 [2024-07-15 17:35:34.783005] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:38.964 [2024-07-15 17:35:34.783010] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d56434c80 name raid_bdev1, state offline 00:16:39.220 17:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:39.220 17:35:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.220 17:35:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:39.220 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:39.220 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:39.220 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:39.220 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:39.478 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:39.478 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:39.478 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:39.734 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:39.734 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:39.734 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:40.302 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:40.302 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:40.302 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:40.302 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:40.302 17:35:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:40.302 [2024-07-15 17:35:36.131027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:40.302 [2024-07-15 17:35:36.131078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:40.303 [2024-07-15 17:35:36.131091] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56435900 00:16:40.303 [2024-07-15 17:35:36.131100] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:40.303 [2024-07-15 17:35:36.131759] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:40.303 [2024-07-15 17:35:36.131784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:40.303 [2024-07-15 17:35:36.131809] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:40.303 [2024-07-15 17:35:36.131821] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:40.561 pt2 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:40.561 17:35:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:40.561 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.819 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:40.819 "name": "raid_bdev1", 00:16:40.819 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:40.819 "strip_size_kb": 0, 00:16:40.819 "state": "configuring", 00:16:40.819 "raid_level": "raid1", 00:16:40.819 "superblock": true, 00:16:40.819 "num_base_bdevs": 4, 00:16:40.819 "num_base_bdevs_discovered": 1, 00:16:40.819 "num_base_bdevs_operational": 3, 00:16:40.819 "base_bdevs_list": [ 00:16:40.819 { 00:16:40.819 "name": null, 00:16:40.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.819 "is_configured": false, 00:16:40.819 "data_offset": 2048, 00:16:40.819 "data_size": 63488 00:16:40.819 }, 00:16:40.819 { 00:16:40.819 "name": "pt2", 00:16:40.819 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:40.819 "is_configured": true, 00:16:40.819 "data_offset": 2048, 00:16:40.819 "data_size": 63488 00:16:40.819 }, 00:16:40.819 { 00:16:40.819 "name": null, 00:16:40.819 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:40.819 "is_configured": false, 00:16:40.819 "data_offset": 2048, 00:16:40.819 "data_size": 63488 00:16:40.819 }, 00:16:40.819 { 00:16:40.819 "name": null, 00:16:40.819 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:40.819 "is_configured": false, 00:16:40.819 "data_offset": 2048, 00:16:40.819 "data_size": 63488 00:16:40.819 } 00:16:40.819 ] 00:16:40.819 }' 00:16:40.819 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:40.819 17:35:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:41.077 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:41.077 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:41.077 17:35:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:41.335 [2024-07-15 17:35:37.051093] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:41.335 [2024-07-15 17:35:37.051159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.335 [2024-07-15 17:35:37.051172] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56435680 00:16:41.335 [2024-07-15 17:35:37.051180] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.335 [2024-07-15 17:35:37.051295] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.335 [2024-07-15 17:35:37.051306] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt3 00:16:41.335 [2024-07-15 17:35:37.051329] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:41.335 [2024-07-15 17:35:37.051338] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:41.335 pt3 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:41.335 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.593 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:41.593 "name": "raid_bdev1", 00:16:41.593 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:41.593 "strip_size_kb": 0, 00:16:41.593 "state": "configuring", 00:16:41.593 "raid_level": "raid1", 00:16:41.593 "superblock": true, 00:16:41.593 "num_base_bdevs": 4, 00:16:41.593 "num_base_bdevs_discovered": 2, 00:16:41.593 "num_base_bdevs_operational": 3, 00:16:41.593 "base_bdevs_list": [ 00:16:41.593 { 00:16:41.593 "name": null, 00:16:41.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.593 "is_configured": false, 00:16:41.593 "data_offset": 2048, 00:16:41.593 "data_size": 63488 00:16:41.593 }, 00:16:41.593 { 00:16:41.593 "name": "pt2", 00:16:41.593 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:41.593 "is_configured": true, 00:16:41.593 "data_offset": 2048, 00:16:41.593 "data_size": 63488 00:16:41.593 }, 00:16:41.593 { 00:16:41.593 "name": "pt3", 00:16:41.593 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:41.593 "is_configured": true, 00:16:41.593 "data_offset": 2048, 00:16:41.593 "data_size": 63488 00:16:41.593 }, 00:16:41.593 { 00:16:41.593 "name": null, 00:16:41.593 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:41.593 "is_configured": false, 00:16:41.593 "data_offset": 2048, 00:16:41.593 "data_size": 63488 00:16:41.593 } 00:16:41.593 ] 00:16:41.593 }' 00:16:41.593 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:41.593 17:35:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.159 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:42.159 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:42.159 17:35:37 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:16:42.159 17:35:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:42.417 [2024-07-15 17:35:38.007216] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:42.417 [2024-07-15 17:35:38.007270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.417 [2024-07-15 17:35:38.007283] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56434c80 00:16:42.417 [2024-07-15 17:35:38.007291] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.417 [2024-07-15 17:35:38.007404] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.417 [2024-07-15 17:35:38.007415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:42.417 [2024-07-15 17:35:38.007439] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:42.417 [2024-07-15 17:35:38.007447] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:42.417 [2024-07-15 17:35:38.007483] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25d56434780 00:16:42.417 [2024-07-15 17:35:38.007487] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:42.417 [2024-07-15 17:35:38.007508] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d56497e20 00:16:42.417 [2024-07-15 17:35:38.007555] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25d56434780 00:16:42.417 [2024-07-15 17:35:38.007559] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25d56434780 00:16:42.417 [2024-07-15 17:35:38.007580] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.417 pt4 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:42.417 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.675 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:16:42.675 "name": "raid_bdev1", 00:16:42.675 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:42.675 "strip_size_kb": 0, 00:16:42.675 "state": "online", 00:16:42.675 "raid_level": "raid1", 00:16:42.675 "superblock": true, 00:16:42.675 "num_base_bdevs": 4, 00:16:42.675 "num_base_bdevs_discovered": 3, 00:16:42.675 "num_base_bdevs_operational": 3, 00:16:42.675 "base_bdevs_list": [ 00:16:42.675 { 00:16:42.675 "name": null, 00:16:42.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.675 "is_configured": false, 00:16:42.675 "data_offset": 2048, 00:16:42.675 "data_size": 63488 00:16:42.675 }, 00:16:42.675 { 00:16:42.675 "name": "pt2", 00:16:42.675 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:42.675 "is_configured": true, 00:16:42.675 "data_offset": 2048, 00:16:42.675 "data_size": 63488 00:16:42.675 }, 00:16:42.675 { 00:16:42.675 "name": "pt3", 00:16:42.675 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:42.675 "is_configured": true, 00:16:42.675 "data_offset": 2048, 00:16:42.675 "data_size": 63488 00:16:42.675 }, 00:16:42.675 { 00:16:42.675 "name": "pt4", 00:16:42.675 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:42.675 "is_configured": true, 00:16:42.675 "data_offset": 2048, 00:16:42.675 "data_size": 63488 00:16:42.675 } 00:16:42.675 ] 00:16:42.675 }' 00:16:42.675 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:42.675 17:35:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:42.934 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:43.191 [2024-07-15 17:35:38.871274] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.191 [2024-07-15 17:35:38.871299] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:43.191 [2024-07-15 17:35:38.871322] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:43.191 [2024-07-15 17:35:38.871340] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:43.191 [2024-07-15 17:35:38.871345] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d56434780 name raid_bdev1, state offline 00:16:43.191 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.191 17:35:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:43.449 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:43.449 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:43.449 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:16:43.449 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:16:43.449 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:43.708 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:43.966 [2024-07-15 17:35:39.655303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
00:16:43.966 [2024-07-15 17:35:39.655357] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.966 [2024-07-15 17:35:39.655370] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56434c80 00:16:43.966 [2024-07-15 17:35:39.655378] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.966 [2024-07-15 17:35:39.656012] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.966 [2024-07-15 17:35:39.656038] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:43.966 [2024-07-15 17:35:39.656064] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:43.966 [2024-07-15 17:35:39.656076] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:43.966 [2024-07-15 17:35:39.656106] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:43.966 [2024-07-15 17:35:39.656115] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:43.966 [2024-07-15 17:35:39.656121] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d56434780 name raid_bdev1, state configuring 00:16:43.966 [2024-07-15 17:35:39.656129] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:43.966 [2024-07-15 17:35:39.656148] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:43.966 pt1 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:43.966 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.225 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:44.225 "name": "raid_bdev1", 00:16:44.225 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:44.225 "strip_size_kb": 0, 00:16:44.225 "state": "configuring", 00:16:44.225 "raid_level": "raid1", 00:16:44.225 "superblock": true, 00:16:44.225 "num_base_bdevs": 4, 00:16:44.225 "num_base_bdevs_discovered": 2, 00:16:44.225 "num_base_bdevs_operational": 3, 00:16:44.225 
"base_bdevs_list": [ 00:16:44.225 { 00:16:44.225 "name": null, 00:16:44.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.225 "is_configured": false, 00:16:44.225 "data_offset": 2048, 00:16:44.225 "data_size": 63488 00:16:44.225 }, 00:16:44.225 { 00:16:44.225 "name": "pt2", 00:16:44.225 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:44.225 "is_configured": true, 00:16:44.225 "data_offset": 2048, 00:16:44.225 "data_size": 63488 00:16:44.225 }, 00:16:44.225 { 00:16:44.225 "name": "pt3", 00:16:44.225 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:44.225 "is_configured": true, 00:16:44.225 "data_offset": 2048, 00:16:44.225 "data_size": 63488 00:16:44.225 }, 00:16:44.225 { 00:16:44.225 "name": null, 00:16:44.225 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:44.225 "is_configured": false, 00:16:44.225 "data_offset": 2048, 00:16:44.225 "data_size": 63488 00:16:44.225 } 00:16:44.225 ] 00:16:44.225 }' 00:16:44.225 17:35:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:44.225 17:35:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:44.789 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:16:44.789 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:45.047 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:16:45.047 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:45.304 [2024-07-15 17:35:40.923361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:45.304 [2024-07-15 17:35:40.923407] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:45.304 [2024-07-15 17:35:40.923420] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d56435180 00:16:45.304 [2024-07-15 17:35:40.923427] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:45.304 [2024-07-15 17:35:40.923538] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:45.304 [2024-07-15 17:35:40.923549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:45.304 [2024-07-15 17:35:40.923572] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:45.304 [2024-07-15 17:35:40.923580] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:45.304 [2024-07-15 17:35:40.923609] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x25d56434780 00:16:45.304 [2024-07-15 17:35:40.923613] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:45.304 [2024-07-15 17:35:40.923635] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25d56497e20 00:16:45.304 [2024-07-15 17:35:40.923684] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25d56434780 00:16:45.304 [2024-07-15 17:35:40.923689] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x25d56434780 00:16:45.304 [2024-07-15 17:35:40.923709] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.304 pt4 
00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.304 17:35:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.561 17:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.561 "name": "raid_bdev1", 00:16:45.561 "uuid": "9d7837b4-42d0-11ef-96ac-773515fba644", 00:16:45.561 "strip_size_kb": 0, 00:16:45.561 "state": "online", 00:16:45.561 "raid_level": "raid1", 00:16:45.561 "superblock": true, 00:16:45.561 "num_base_bdevs": 4, 00:16:45.561 "num_base_bdevs_discovered": 3, 00:16:45.561 "num_base_bdevs_operational": 3, 00:16:45.561 "base_bdevs_list": [ 00:16:45.561 { 00:16:45.561 "name": null, 00:16:45.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.561 "is_configured": false, 00:16:45.561 "data_offset": 2048, 00:16:45.561 "data_size": 63488 00:16:45.561 }, 00:16:45.561 { 00:16:45.561 "name": "pt2", 00:16:45.561 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:45.561 "is_configured": true, 00:16:45.561 "data_offset": 2048, 00:16:45.561 "data_size": 63488 00:16:45.561 }, 00:16:45.561 { 00:16:45.561 "name": "pt3", 00:16:45.561 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:45.561 "is_configured": true, 00:16:45.561 "data_offset": 2048, 00:16:45.561 "data_size": 63488 00:16:45.561 }, 00:16:45.561 { 00:16:45.561 "name": "pt4", 00:16:45.561 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:45.561 "is_configured": true, 00:16:45.561 "data_offset": 2048, 00:16:45.561 "data_size": 63488 00:16:45.561 } 00:16:45.561 ] 00:16:45.561 }' 00:16:45.561 17:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.561 17:35:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.819 17:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:45.819 17:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:46.076 17:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:46.076 17:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:46.076 17:35:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:46.334 [2024-07-15 17:35:42.107552] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 9d7837b4-42d0-11ef-96ac-773515fba644 '!=' 9d7837b4-42d0-11ef-96ac-773515fba644 ']' 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 64624 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 64624 ']' 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 64624 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 64624 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:46.334 killing process with pid 64624 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64624' 00:16:46.334 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 64624 00:16:46.334 [2024-07-15 17:35:42.134755] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.334 [2024-07-15 17:35:42.134787] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.334 [2024-07-15 17:35:42.134805] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.334 [2024-07-15 17:35:42.134810] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25d56434780 name raid_bdev1, state offline 00:16:46.335 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 64624 00:16:46.335 [2024-07-15 17:35:42.158493] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:46.593 17:35:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:46.593 00:16:46.593 real 0m22.251s 00:16:46.593 user 0m40.677s 00:16:46.593 sys 0m2.977s 00:16:46.593 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:46.593 17:35:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.593 ************************************ 00:16:46.593 END TEST raid_superblock_test 00:16:46.593 ************************************ 00:16:46.593 17:35:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:46.593 17:35:42 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:46.593 17:35:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:46.593 17:35:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:46.593 17:35:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:46.593 ************************************ 00:16:46.593 START TEST raid_read_error_test 00:16:46.593 ************************************ 
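raid_io_error_test, which raid_read_error_test wraps here, stacks each base device as a malloc bdev behind an error-injection bdev behind a passthru bdev, builds a superblock-enabled raid1 over the four passthru bdevs, and then drives random I/O with bdevperf while injecting a failure on the first base. A condensed bash sketch of that setup using the RPC calls visible in the trace (the loop and the variable names are this sketch's own shorthand):

    sock=/var/tmp/spdk-raid.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    for i in 1 2 3 4; do
        # malloc backing bdev -> error bdev (EE_ prefix) -> passthru bdev used by the raid
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        "$rpc" -s "$sock" bdev_error_create "BaseBdev${i}_malloc"
        "$rpc" -s "$sock" bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done

    # Superblock-enabled raid1 across the four passthru bdevs.
    "$rpc" -s "$sock" bdev_raid_create -r raid1 \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s

    # While bdevperf runs against the same socket, fail reads on the first base bdev.
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc read failure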
00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.QE98aF67Tr 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65264 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65264 /var/tmp/spdk-raid.sock 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 65264 ']' 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 
-w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:46.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:46.593 17:35:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:46.593 [2024-07-15 17:35:42.408280] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:16:46.593 [2024-07-15 17:35:42.408538] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:47.160 EAL: TSC is not safe to use in SMP mode 00:16:47.160 EAL: TSC is not invariant 00:16:47.160 [2024-07-15 17:35:42.931273] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.418 [2024-07-15 17:35:43.025767] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:47.418 [2024-07-15 17:35:43.028144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:47.418 [2024-07-15 17:35:43.029062] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.418 [2024-07-15 17:35:43.029078] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:47.984 17:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:47.984 17:35:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:47.984 17:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:47.984 17:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:47.984 BaseBdev1_malloc 00:16:47.984 17:35:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:48.243 true 00:16:48.501 17:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:48.501 [2024-07-15 17:35:44.318200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:48.501 [2024-07-15 17:35:44.318274] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:48.501 [2024-07-15 17:35:44.318303] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f7fe4c34780 00:16:48.501 [2024-07-15 17:35:44.318312] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:48.501 [2024-07-15 17:35:44.318984] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:48.501 [2024-07-15 17:35:44.319007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:48.501 BaseBdev1 00:16:48.759 17:35:44 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:48.759 17:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:48.759 BaseBdev2_malloc 00:16:49.018 17:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:49.276 true 00:16:49.276 17:35:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:49.534 [2024-07-15 17:35:45.110203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:49.534 [2024-07-15 17:35:45.110259] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.534 [2024-07-15 17:35:45.110285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f7fe4c34c80 00:16:49.534 [2024-07-15 17:35:45.110295] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.534 [2024-07-15 17:35:45.110952] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.534 [2024-07-15 17:35:45.110978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:49.534 BaseBdev2 00:16:49.534 17:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:49.534 17:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:49.793 BaseBdev3_malloc 00:16:49.793 17:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:50.050 true 00:16:50.050 17:35:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:50.310 [2024-07-15 17:35:46.038221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:50.310 [2024-07-15 17:35:46.038278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:50.310 [2024-07-15 17:35:46.038306] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f7fe4c35180 00:16:50.310 [2024-07-15 17:35:46.038315] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:50.310 [2024-07-15 17:35:46.038980] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:50.310 [2024-07-15 17:35:46.039007] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:50.310 BaseBdev3 00:16:50.310 17:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:50.310 17:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:50.568 BaseBdev4_malloc 00:16:50.568 17:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create 
BaseBdev4_malloc 00:16:51.164 true 00:16:51.164 17:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:51.164 [2024-07-15 17:35:46.974342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:51.164 [2024-07-15 17:35:46.974432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:51.164 [2024-07-15 17:35:46.974471] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1f7fe4c35680 00:16:51.164 [2024-07-15 17:35:46.974481] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:51.164 [2024-07-15 17:35:46.975139] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:51.164 [2024-07-15 17:35:46.975165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:51.164 BaseBdev4 00:16:51.164 17:35:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:51.732 [2024-07-15 17:35:47.274346] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.732 [2024-07-15 17:35:47.274933] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:51.732 [2024-07-15 17:35:47.274960] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:51.732 [2024-07-15 17:35:47.274975] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:51.732 [2024-07-15 17:35:47.275044] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1f7fe4c35900 00:16:51.732 [2024-07-15 17:35:47.275050] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:51.732 [2024-07-15 17:35:47.275087] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f7fe4ca0e20 00:16:51.732 [2024-07-15 17:35:47.275168] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1f7fe4c35900 00:16:51.732 [2024-07-15 17:35:47.275172] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1f7fe4c35900 00:16:51.732 [2024-07-15 17:35:47.275201] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.732 
17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.732 "name": "raid_bdev1", 00:16:51.732 "uuid": "abae77b2-42d0-11ef-96ac-773515fba644", 00:16:51.732 "strip_size_kb": 0, 00:16:51.732 "state": "online", 00:16:51.732 "raid_level": "raid1", 00:16:51.732 "superblock": true, 00:16:51.732 "num_base_bdevs": 4, 00:16:51.732 "num_base_bdevs_discovered": 4, 00:16:51.732 "num_base_bdevs_operational": 4, 00:16:51.732 "base_bdevs_list": [ 00:16:51.732 { 00:16:51.732 "name": "BaseBdev1", 00:16:51.732 "uuid": "fdd4b78c-88da-1d5d-8d69-8333599e68d0", 00:16:51.732 "is_configured": true, 00:16:51.732 "data_offset": 2048, 00:16:51.732 "data_size": 63488 00:16:51.732 }, 00:16:51.732 { 00:16:51.732 "name": "BaseBdev2", 00:16:51.732 "uuid": "6b495513-bb46-0f5e-9a20-de00c85acf00", 00:16:51.732 "is_configured": true, 00:16:51.732 "data_offset": 2048, 00:16:51.732 "data_size": 63488 00:16:51.732 }, 00:16:51.732 { 00:16:51.732 "name": "BaseBdev3", 00:16:51.732 "uuid": "93a8ac68-a5ed-b154-97f2-54079175491b", 00:16:51.732 "is_configured": true, 00:16:51.732 "data_offset": 2048, 00:16:51.732 "data_size": 63488 00:16:51.732 }, 00:16:51.732 { 00:16:51.732 "name": "BaseBdev4", 00:16:51.732 "uuid": "4233a162-6f34-7c58-b447-04b46afbb357", 00:16:51.732 "is_configured": true, 00:16:51.732 "data_offset": 2048, 00:16:51.732 "data_size": 63488 00:16:51.732 } 00:16:51.732 ] 00:16:51.732 }' 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.732 17:35:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.300 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:52.300 17:35:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:52.300 [2024-07-15 17:35:48.030557] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1f7fe4ca0ec0 00:16:53.236 17:35:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:53.496 17:35:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.496 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:54.063 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:54.063 "name": "raid_bdev1", 00:16:54.063 "uuid": "abae77b2-42d0-11ef-96ac-773515fba644", 00:16:54.063 "strip_size_kb": 0, 00:16:54.063 "state": "online", 00:16:54.063 "raid_level": "raid1", 00:16:54.063 "superblock": true, 00:16:54.063 "num_base_bdevs": 4, 00:16:54.063 "num_base_bdevs_discovered": 4, 00:16:54.063 "num_base_bdevs_operational": 4, 00:16:54.063 "base_bdevs_list": [ 00:16:54.063 { 00:16:54.063 "name": "BaseBdev1", 00:16:54.063 "uuid": "fdd4b78c-88da-1d5d-8d69-8333599e68d0", 00:16:54.063 "is_configured": true, 00:16:54.063 "data_offset": 2048, 00:16:54.063 "data_size": 63488 00:16:54.063 }, 00:16:54.063 { 00:16:54.063 "name": "BaseBdev2", 00:16:54.063 "uuid": "6b495513-bb46-0f5e-9a20-de00c85acf00", 00:16:54.063 "is_configured": true, 00:16:54.063 "data_offset": 2048, 00:16:54.063 "data_size": 63488 00:16:54.063 }, 00:16:54.063 { 00:16:54.063 "name": "BaseBdev3", 00:16:54.063 "uuid": "93a8ac68-a5ed-b154-97f2-54079175491b", 00:16:54.063 "is_configured": true, 00:16:54.063 "data_offset": 2048, 00:16:54.063 "data_size": 63488 00:16:54.063 }, 00:16:54.063 { 00:16:54.063 "name": "BaseBdev4", 00:16:54.063 "uuid": "4233a162-6f34-7c58-b447-04b46afbb357", 00:16:54.063 "is_configured": true, 00:16:54.063 "data_offset": 2048, 00:16:54.063 "data_size": 63488 00:16:54.063 } 00:16:54.063 ] 00:16:54.063 }' 00:16:54.063 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:54.063 17:35:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.321 17:35:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:54.580 [2024-07-15 17:35:50.198123] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:54.580 [2024-07-15 17:35:50.198152] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:54.580 [2024-07-15 17:35:50.198470] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:54.580 [2024-07-15 17:35:50.198481] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:54.580 [2024-07-15 17:35:50.198500] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:54.580 [2024-07-15 17:35:50.198504] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1f7fe4c35900 name raid_bdev1, state offline 00:16:54.580 0 00:16:54.580 17:35:50 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65264 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 65264 ']' 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 65264 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65264 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:16:54.580 killing process with pid 65264 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65264' 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 65264 00:16:54.580 [2024-07-15 17:35:50.224807] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:54.580 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 65264 00:16:54.580 [2024-07-15 17:35:50.247815] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.QE98aF67Tr 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:54.839 00:16:54.839 real 0m8.042s 00:16:54.839 user 0m13.052s 00:16:54.839 sys 0m1.281s 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.839 17:35:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.839 ************************************ 00:16:54.839 END TEST raid_read_error_test 00:16:54.839 ************************************ 00:16:54.839 17:35:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:54.839 17:35:50 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:54.839 17:35:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:54.839 17:35:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:54.839 17:35:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.839 ************************************ 00:16:54.839 START TEST raid_write_error_test 00:16:54.839 ************************************ 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 00:16:54.839 17:35:50 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.7l9luDrwfN 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65406 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65406 /var/tmp/spdk-raid.sock 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 65406 ']' 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:54.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:54.839 17:35:50 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.839 [2024-07-15 17:35:50.491978] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:16:54.839 [2024-07-15 17:35:50.492154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:55.407 EAL: TSC is not safe to use in SMP mode 00:16:55.407 EAL: TSC is not invariant 00:16:55.407 [2024-07-15 17:35:51.079202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.407 [2024-07-15 17:35:51.187642] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:55.407 [2024-07-15 17:35:51.189956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.407 [2024-07-15 17:35:51.190758] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.407 [2024-07-15 17:35:51.190773] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.973 17:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:55.973 17:35:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:55.973 17:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:55.973 17:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:55.973 BaseBdev1_malloc 00:16:56.230 17:35:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:56.230 true 00:16:56.230 17:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:56.488 [2024-07-15 17:35:52.276691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:56.488 [2024-07-15 17:35:52.276752] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:56.488 [2024-07-15 17:35:52.276781] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2778b5234780 00:16:56.489 [2024-07-15 17:35:52.276789] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:56.489 [2024-07-15 17:35:52.277486] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:56.489 [2024-07-15 17:35:52.277511] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:56.489 BaseBdev1 00:16:56.489 17:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:56.489 17:35:52 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:57.055 BaseBdev2_malloc 00:16:57.055 17:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:57.055 true 00:16:57.055 17:35:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:57.621 [2024-07-15 17:35:53.144700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:57.621 [2024-07-15 17:35:53.144754] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:57.621 [2024-07-15 17:35:53.144782] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2778b5234c80 00:16:57.621 [2024-07-15 17:35:53.144799] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:57.621 [2024-07-15 17:35:53.145466] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:57.621 [2024-07-15 17:35:53.145491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:57.621 BaseBdev2 00:16:57.621 17:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:57.621 17:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:57.898 BaseBdev3_malloc 00:16:57.898 17:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:57.898 true 00:16:58.156 17:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:58.156 [2024-07-15 17:35:53.952716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:58.156 [2024-07-15 17:35:53.952779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.156 [2024-07-15 17:35:53.952806] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2778b5235180 00:16:58.156 [2024-07-15 17:35:53.952815] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.156 [2024-07-15 17:35:53.953466] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.156 [2024-07-15 17:35:53.953491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:58.156 BaseBdev3 00:16:58.156 17:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:58.156 17:35:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:58.722 BaseBdev4_malloc 00:16:58.722 17:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:58.722 true 00:16:58.722 17:35:54 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:58.980 [2024-07-15 17:35:54.728728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:58.981 [2024-07-15 17:35:54.728791] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.981 [2024-07-15 17:35:54.728819] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2778b5235680 00:16:58.981 [2024-07-15 17:35:54.728828] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.981 [2024-07-15 17:35:54.729483] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.981 [2024-07-15 17:35:54.729508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:58.981 BaseBdev4 00:16:58.981 17:35:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:59.239 [2024-07-15 17:35:55.052745] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:59.239 [2024-07-15 17:35:55.053351] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:59.239 [2024-07-15 17:35:55.053376] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:59.239 [2024-07-15 17:35:55.053403] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:59.239 [2024-07-15 17:35:55.053474] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2778b5235900 00:16:59.239 [2024-07-15 17:35:55.053481] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:59.239 [2024-07-15 17:35:55.053518] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2778b52a0e20 00:16:59.239 [2024-07-15 17:35:55.053600] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2778b5235900 00:16:59.239 [2024-07-15 17:35:55.053605] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2778b5235900 00:16:59.239 [2024-07-15 17:35:55.053634] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 
00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:59.497 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.497 "name": "raid_bdev1", 00:16:59.497 "uuid": "b0515b83-42d0-11ef-96ac-773515fba644", 00:16:59.497 "strip_size_kb": 0, 00:16:59.497 "state": "online", 00:16:59.497 "raid_level": "raid1", 00:16:59.497 "superblock": true, 00:16:59.497 "num_base_bdevs": 4, 00:16:59.497 "num_base_bdevs_discovered": 4, 00:16:59.497 "num_base_bdevs_operational": 4, 00:16:59.497 "base_bdevs_list": [ 00:16:59.497 { 00:16:59.497 "name": "BaseBdev1", 00:16:59.497 "uuid": "a52f7777-ac24-7d51-8a2f-c9cf5466cbf2", 00:16:59.497 "is_configured": true, 00:16:59.497 "data_offset": 2048, 00:16:59.497 "data_size": 63488 00:16:59.497 }, 00:16:59.497 { 00:16:59.497 "name": "BaseBdev2", 00:16:59.497 "uuid": "a50bd947-9aab-a055-be69-a664a5826786", 00:16:59.497 "is_configured": true, 00:16:59.497 "data_offset": 2048, 00:16:59.497 "data_size": 63488 00:16:59.497 }, 00:16:59.497 { 00:16:59.497 "name": "BaseBdev3", 00:16:59.497 "uuid": "0398bea9-d330-8456-8a0c-259a7405fea9", 00:16:59.497 "is_configured": true, 00:16:59.497 "data_offset": 2048, 00:16:59.497 "data_size": 63488 00:16:59.497 }, 00:16:59.497 { 00:16:59.497 "name": "BaseBdev4", 00:16:59.497 "uuid": "99e420e3-3c05-2b57-b3b9-9480d13446a3", 00:16:59.497 "is_configured": true, 00:16:59.497 "data_offset": 2048, 00:16:59.498 "data_size": 63488 00:16:59.498 } 00:16:59.498 ] 00:16:59.498 }' 00:16:59.498 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.498 17:35:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.063 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:17:00.063 17:35:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:17:00.063 [2024-07-15 17:35:55.744940] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2778b52a0ec0 00:17:00.998 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:17:01.256 [2024-07-15 17:35:56.960946] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:17:01.256 [2024-07-15 17:35:56.960996] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:01.256 [2024-07-15 17:35:56.961129] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2778b52a0ec0 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:17:01.256 
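The write variant reaches a different expectation than the read variant above: a failed write cannot be served from the mirror, so raid1 hot-removes the failing base bdev (the _raid_bdev_fail_base_bdev notice above) and only three of the four base bdevs should remain, whereas an injected read failure leaves all four configured. The traced branch around bdev_raid.sh@829-@835 amounts to roughly the following (sketch only; the real test also keys the branch off the raid level):

    # error_io_type, num_base_bdevs and expected_num_base_bdevs are the script's own locals.
    if [[ $error_io_type == write ]]; then
        expected_num_base_bdevs=$((num_base_bdevs - 1))   # failing writer removed: 3 of 4 remain
    else
        expected_num_base_bdevs=$num_base_bdevs           # read error recovered: all 4 remain
    fi
    verify_raid_bdev_state raid_bdev1 online raid1 0 "$expected_num_base_bdevs"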
17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:01.256 17:35:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:01.528 17:35:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:01.528 "name": "raid_bdev1", 00:17:01.528 "uuid": "b0515b83-42d0-11ef-96ac-773515fba644", 00:17:01.528 "strip_size_kb": 0, 00:17:01.528 "state": "online", 00:17:01.528 "raid_level": "raid1", 00:17:01.528 "superblock": true, 00:17:01.528 "num_base_bdevs": 4, 00:17:01.528 "num_base_bdevs_discovered": 3, 00:17:01.528 "num_base_bdevs_operational": 3, 00:17:01.528 "base_bdevs_list": [ 00:17:01.528 { 00:17:01.528 "name": null, 00:17:01.528 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.528 "is_configured": false, 00:17:01.528 "data_offset": 2048, 00:17:01.528 "data_size": 63488 00:17:01.528 }, 00:17:01.528 { 00:17:01.528 "name": "BaseBdev2", 00:17:01.528 "uuid": "a50bd947-9aab-a055-be69-a664a5826786", 00:17:01.528 "is_configured": true, 00:17:01.528 "data_offset": 2048, 00:17:01.528 "data_size": 63488 00:17:01.528 }, 00:17:01.528 { 00:17:01.528 "name": "BaseBdev3", 00:17:01.528 "uuid": "0398bea9-d330-8456-8a0c-259a7405fea9", 00:17:01.528 "is_configured": true, 00:17:01.528 "data_offset": 2048, 00:17:01.528 "data_size": 63488 00:17:01.528 }, 00:17:01.528 { 00:17:01.528 "name": "BaseBdev4", 00:17:01.528 "uuid": "99e420e3-3c05-2b57-b3b9-9480d13446a3", 00:17:01.528 "is_configured": true, 00:17:01.528 "data_offset": 2048, 00:17:01.528 "data_size": 63488 00:17:01.528 } 00:17:01.528 ] 00:17:01.528 }' 00:17:01.528 17:35:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:01.528 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.803 17:35:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:02.062 [2024-07-15 17:35:57.835359] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.062 [2024-07-15 17:35:57.835401] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:02.062 [2024-07-15 17:35:57.836021] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:02.062 [2024-07-15 17:35:57.836050] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
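Once the raid bdev has been deleted and bdevperf has been shut down, the pass or fail decision for the whole error test comes from the bdevperf log written to the mktemp file under /raidtest rather than from RPC state: the raid_bdev1 row is extracted and the failure-rate column the script stores as fail_per_s has to read 0.00. A sketch of that final check, with bdevperf_log standing for the tmp file created at the start of the test:

    # Skip the per-job header lines, keep the raid_bdev1 row, take the column used as fail_per_s.
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s == "0.00" ]]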
00:17:02.062 [2024-07-15 17:35:57.836079] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:02.062 [2024-07-15 17:35:57.836087] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2778b5235900 name raid_bdev1, state offline 00:17:02.062 0 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65406 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 65406 ']' 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 65406 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65406 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:17:02.062 killing process with pid 65406 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65406' 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 65406 00:17:02.062 [2024-07-15 17:35:57.861849] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:02.062 17:35:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 65406 00:17:02.062 [2024-07-15 17:35:57.885734] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.7l9luDrwfN 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:17:02.321 00:17:02.321 real 0m7.607s 00:17:02.321 user 0m12.227s 00:17:02.321 sys 0m1.238s 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:02.321 ************************************ 00:17:02.321 END TEST raid_write_error_test 00:17:02.321 ************************************ 00:17:02.321 17:35:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.321 17:35:58 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:02.321 17:35:58 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:17:02.321 17:35:58 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:17:02.321 17:35:58 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:17:02.321 17:35:58 bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test 
raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:17:02.321 17:35:58 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:02.321 17:35:58 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:02.321 17:35:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:02.321 ************************************ 00:17:02.321 START TEST raid_state_function_test_sb_4k 00:17:02.321 ************************************ 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=65542 00:17:02.321 Process raid pid: 65542 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65542' 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 65542 /var/tmp/spdk-raid.sock 00:17:02.321 17:35:58 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 65542 ']' 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.321 17:35:58 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:02.321 [2024-07-15 17:35:58.136773] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:17:02.321 [2024-07-15 17:35:58.137035] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:02.895 EAL: TSC is not safe to use in SMP mode 00:17:02.895 EAL: TSC is not invariant 00:17:02.895 [2024-07-15 17:35:58.680518] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.153 [2024-07-15 17:35:58.768624] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:03.153 [2024-07-15 17:35:58.770808] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:03.153 [2024-07-15 17:35:58.771636] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.153 [2024-07-15 17:35:58.771653] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:03.717 [2024-07-15 17:35:59.495689] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:03.717 [2024-07-15 17:35:59.495774] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:03.717 [2024-07-15 17:35:59.495779] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:03.717 [2024-07-15 17:35:59.495804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
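At this point raid_state_function_test_sb_4k has launched a fresh bdev_svc RPC target (pid 65542) and immediately created Existed_Raid from two base bdevs that do not exist yet, so the array is parked in the "configuring" state that the verify_raid_bdev_state call below checks for. A condensed sketch of that setup, built only from the commands traced above (waitforlisten is the autotest_common.sh helper shown in the trace):

  # start the bdev service and declare a superblock raid1 before its members exist
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
  raid_pid=$!
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
  # expected result: state "configuring", num_base_bdevs_discovered 0, num_base_bdevs_operational 2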
00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.717 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.974 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.974 "name": "Existed_Raid", 00:17:03.974 "uuid": "b2f74c1f-42d0-11ef-96ac-773515fba644", 00:17:03.974 "strip_size_kb": 0, 00:17:03.974 "state": "configuring", 00:17:03.974 "raid_level": "raid1", 00:17:03.974 "superblock": true, 00:17:03.974 "num_base_bdevs": 2, 00:17:03.974 "num_base_bdevs_discovered": 0, 00:17:03.974 "num_base_bdevs_operational": 2, 00:17:03.974 "base_bdevs_list": [ 00:17:03.974 { 00:17:03.974 "name": "BaseBdev1", 00:17:03.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.974 "is_configured": false, 00:17:03.974 "data_offset": 0, 00:17:03.974 "data_size": 0 00:17:03.974 }, 00:17:03.974 { 00:17:03.974 "name": "BaseBdev2", 00:17:03.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.974 "is_configured": false, 00:17:03.974 "data_offset": 0, 00:17:03.974 "data_size": 0 00:17:03.974 } 00:17:03.974 ] 00:17:03.974 }' 00:17:03.974 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.974 17:35:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.538 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:04.538 [2024-07-15 17:36:00.363716] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:04.538 [2024-07-15 17:36:00.363765] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x314f0c34500 name Existed_Raid, state configuring 00:17:04.795 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:04.795 [2024-07-15 17:36:00.607754] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:04.795 [2024-07-15 17:36:00.607804] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:04.795 [2024-07-15 17:36:00.607810] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:04.795 [2024-07-15 17:36:00.607818] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:04.795 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:17:05.361 [2024-07-15 17:36:00.928822] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:05.361 BaseBdev1 00:17:05.361 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:05.361 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:05.361 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:05.361 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:17:05.361 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:05.361 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:05.361 17:36:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:05.618 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:05.876 [ 00:17:05.876 { 00:17:05.876 "name": "BaseBdev1", 00:17:05.876 "aliases": [ 00:17:05.876 "b3d1d0e8-42d0-11ef-96ac-773515fba644" 00:17:05.876 ], 00:17:05.876 "product_name": "Malloc disk", 00:17:05.876 "block_size": 4096, 00:17:05.876 "num_blocks": 8192, 00:17:05.876 "uuid": "b3d1d0e8-42d0-11ef-96ac-773515fba644", 00:17:05.876 "assigned_rate_limits": { 00:17:05.876 "rw_ios_per_sec": 0, 00:17:05.876 "rw_mbytes_per_sec": 0, 00:17:05.876 "r_mbytes_per_sec": 0, 00:17:05.876 "w_mbytes_per_sec": 0 00:17:05.876 }, 00:17:05.876 "claimed": true, 00:17:05.876 "claim_type": "exclusive_write", 00:17:05.876 "zoned": false, 00:17:05.876 "supported_io_types": { 00:17:05.876 "read": true, 00:17:05.876 "write": true, 00:17:05.876 "unmap": true, 00:17:05.876 "flush": true, 00:17:05.876 "reset": true, 00:17:05.876 "nvme_admin": false, 00:17:05.876 "nvme_io": false, 00:17:05.876 "nvme_io_md": false, 00:17:05.876 "write_zeroes": true, 00:17:05.876 "zcopy": true, 00:17:05.876 "get_zone_info": false, 00:17:05.876 "zone_management": false, 00:17:05.876 "zone_append": false, 00:17:05.876 "compare": false, 00:17:05.876 "compare_and_write": false, 00:17:05.876 "abort": true, 00:17:05.876 "seek_hole": false, 00:17:05.876 "seek_data": false, 00:17:05.876 "copy": true, 00:17:05.876 "nvme_iov_md": false 00:17:05.876 }, 00:17:05.876 "memory_domains": [ 00:17:05.876 { 00:17:05.876 "dma_device_id": "system", 00:17:05.876 "dma_device_type": 1 00:17:05.876 }, 00:17:05.876 { 00:17:05.876 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.876 "dma_device_type": 2 00:17:05.876 } 00:17:05.876 ], 00:17:05.876 "driver_specific": {} 00:17:05.876 } 00:17:05.876 ] 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:05.876 
17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.876 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.133 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.133 "name": "Existed_Raid", 00:17:06.133 "uuid": "b3a0fbe0-42d0-11ef-96ac-773515fba644", 00:17:06.133 "strip_size_kb": 0, 00:17:06.133 "state": "configuring", 00:17:06.133 "raid_level": "raid1", 00:17:06.133 "superblock": true, 00:17:06.133 "num_base_bdevs": 2, 00:17:06.133 "num_base_bdevs_discovered": 1, 00:17:06.133 "num_base_bdevs_operational": 2, 00:17:06.133 "base_bdevs_list": [ 00:17:06.133 { 00:17:06.133 "name": "BaseBdev1", 00:17:06.133 "uuid": "b3d1d0e8-42d0-11ef-96ac-773515fba644", 00:17:06.133 "is_configured": true, 00:17:06.133 "data_offset": 256, 00:17:06.133 "data_size": 7936 00:17:06.133 }, 00:17:06.133 { 00:17:06.133 "name": "BaseBdev2", 00:17:06.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.133 "is_configured": false, 00:17:06.133 "data_offset": 0, 00:17:06.133 "data_size": 0 00:17:06.133 } 00:17:06.133 ] 00:17:06.133 }' 00:17:06.133 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.133 17:36:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.391 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:06.649 [2024-07-15 17:36:02.255780] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:06.649 [2024-07-15 17:36:02.255827] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x314f0c34500 name Existed_Raid, state configuring 00:17:06.649 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:06.909 [2024-07-15 17:36:02.495813] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.909 [2024-07-15 17:36:02.496652] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:06.909 [2024-07-15 17:36:02.496689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:06.909 17:36:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.909 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.176 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.176 "name": "Existed_Raid", 00:17:07.176 "uuid": "b4c1144c-42d0-11ef-96ac-773515fba644", 00:17:07.176 "strip_size_kb": 0, 00:17:07.176 "state": "configuring", 00:17:07.176 "raid_level": "raid1", 00:17:07.176 "superblock": true, 00:17:07.176 "num_base_bdevs": 2, 00:17:07.176 "num_base_bdevs_discovered": 1, 00:17:07.176 "num_base_bdevs_operational": 2, 00:17:07.176 "base_bdevs_list": [ 00:17:07.176 { 00:17:07.176 "name": "BaseBdev1", 00:17:07.176 "uuid": "b3d1d0e8-42d0-11ef-96ac-773515fba644", 00:17:07.176 "is_configured": true, 00:17:07.176 "data_offset": 256, 00:17:07.176 "data_size": 7936 00:17:07.176 }, 00:17:07.176 { 00:17:07.176 "name": "BaseBdev2", 00:17:07.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.176 "is_configured": false, 00:17:07.176 "data_offset": 0, 00:17:07.176 "data_size": 0 00:17:07.176 } 00:17:07.176 ] 00:17:07.176 }' 00:17:07.176 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.176 17:36:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:07.434 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:17:07.692 [2024-07-15 17:36:03.383989] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:07.692 [2024-07-15 17:36:03.384053] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x314f0c34a00 00:17:07.692 [2024-07-15 17:36:03.384060] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:07.692 [2024-07-15 17:36:03.384096] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x314f0c97e20 00:17:07.692 
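The bdev_malloc_create 32 4096 -b BaseBdev2 call traced above is the step that completes the array: once the raid module claims this second 4096-byte-block malloc disk (8192 blocks, i.e. 32 MB, matching num_blocks in the bdev dump below), Existed_Raid moves from configuring to online in the DEBUG lines that follow. A rough reconstruction of the step, assuming the same RPC socket:

  # add the missing base bdev; the raid module claims it and brings Existed_Raid online
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_malloc_create 32 4096 -b BaseBdev2
  # re-running bdev_raid_get_bdevs all should now report 2 of 2 base bdevs discovered and state "online"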
[2024-07-15 17:36:03.384160] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x314f0c34a00 00:17:07.692 [2024-07-15 17:36:03.384164] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x314f0c34a00 00:17:07.692 [2024-07-15 17:36:03.384185] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.692 BaseBdev2 00:17:07.692 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:07.692 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:07.692 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:07.692 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:17:07.692 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:07.692 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:07.692 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:07.950 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:08.209 [ 00:17:08.209 { 00:17:08.209 "name": "BaseBdev2", 00:17:08.209 "aliases": [ 00:17:08.209 "b54895e9-42d0-11ef-96ac-773515fba644" 00:17:08.209 ], 00:17:08.209 "product_name": "Malloc disk", 00:17:08.209 "block_size": 4096, 00:17:08.209 "num_blocks": 8192, 00:17:08.209 "uuid": "b54895e9-42d0-11ef-96ac-773515fba644", 00:17:08.209 "assigned_rate_limits": { 00:17:08.209 "rw_ios_per_sec": 0, 00:17:08.209 "rw_mbytes_per_sec": 0, 00:17:08.209 "r_mbytes_per_sec": 0, 00:17:08.209 "w_mbytes_per_sec": 0 00:17:08.209 }, 00:17:08.209 "claimed": true, 00:17:08.209 "claim_type": "exclusive_write", 00:17:08.209 "zoned": false, 00:17:08.209 "supported_io_types": { 00:17:08.209 "read": true, 00:17:08.209 "write": true, 00:17:08.209 "unmap": true, 00:17:08.209 "flush": true, 00:17:08.209 "reset": true, 00:17:08.209 "nvme_admin": false, 00:17:08.209 "nvme_io": false, 00:17:08.209 "nvme_io_md": false, 00:17:08.209 "write_zeroes": true, 00:17:08.209 "zcopy": true, 00:17:08.209 "get_zone_info": false, 00:17:08.209 "zone_management": false, 00:17:08.209 "zone_append": false, 00:17:08.209 "compare": false, 00:17:08.209 "compare_and_write": false, 00:17:08.209 "abort": true, 00:17:08.209 "seek_hole": false, 00:17:08.209 "seek_data": false, 00:17:08.209 "copy": true, 00:17:08.209 "nvme_iov_md": false 00:17:08.209 }, 00:17:08.209 "memory_domains": [ 00:17:08.209 { 00:17:08.209 "dma_device_id": "system", 00:17:08.209 "dma_device_type": 1 00:17:08.209 }, 00:17:08.209 { 00:17:08.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.209 "dma_device_type": 2 00:17:08.209 } 00:17:08.209 ], 00:17:08.209 "driver_specific": {} 00:17:08.209 } 00:17:08.209 ] 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:08.209 17:36:03 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.209 17:36:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.467 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:08.467 "name": "Existed_Raid", 00:17:08.467 "uuid": "b4c1144c-42d0-11ef-96ac-773515fba644", 00:17:08.467 "strip_size_kb": 0, 00:17:08.467 "state": "online", 00:17:08.467 "raid_level": "raid1", 00:17:08.467 "superblock": true, 00:17:08.467 "num_base_bdevs": 2, 00:17:08.467 "num_base_bdevs_discovered": 2, 00:17:08.468 "num_base_bdevs_operational": 2, 00:17:08.468 "base_bdevs_list": [ 00:17:08.468 { 00:17:08.468 "name": "BaseBdev1", 00:17:08.468 "uuid": "b3d1d0e8-42d0-11ef-96ac-773515fba644", 00:17:08.468 "is_configured": true, 00:17:08.468 "data_offset": 256, 00:17:08.468 "data_size": 7936 00:17:08.468 }, 00:17:08.468 { 00:17:08.468 "name": "BaseBdev2", 00:17:08.468 "uuid": "b54895e9-42d0-11ef-96ac-773515fba644", 00:17:08.468 "is_configured": true, 00:17:08.468 "data_offset": 256, 00:17:08.468 "data_size": 7936 00:17:08.468 } 00:17:08.468 ] 00:17:08.468 }' 00:17:08.468 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:08.468 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.727 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:08.727 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:08.727 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:08.727 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:08.727 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:08.727 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:08.727 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:08.727 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:08.986 [2024-07-15 17:36:04.791919] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:08.986 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:08.986 "name": "Existed_Raid", 00:17:08.986 "aliases": [ 00:17:08.986 "b4c1144c-42d0-11ef-96ac-773515fba644" 00:17:08.986 ], 00:17:08.986 "product_name": "Raid Volume", 00:17:08.986 "block_size": 4096, 00:17:08.986 "num_blocks": 7936, 00:17:08.986 "uuid": "b4c1144c-42d0-11ef-96ac-773515fba644", 00:17:08.986 "assigned_rate_limits": { 00:17:08.986 "rw_ios_per_sec": 0, 00:17:08.986 "rw_mbytes_per_sec": 0, 00:17:08.986 "r_mbytes_per_sec": 0, 00:17:08.986 "w_mbytes_per_sec": 0 00:17:08.986 }, 00:17:08.986 "claimed": false, 00:17:08.986 "zoned": false, 00:17:08.986 "supported_io_types": { 00:17:08.986 "read": true, 00:17:08.986 "write": true, 00:17:08.986 "unmap": false, 00:17:08.986 "flush": false, 00:17:08.986 "reset": true, 00:17:08.986 "nvme_admin": false, 00:17:08.986 "nvme_io": false, 00:17:08.986 "nvme_io_md": false, 00:17:08.986 "write_zeroes": true, 00:17:08.986 "zcopy": false, 00:17:08.986 "get_zone_info": false, 00:17:08.986 "zone_management": false, 00:17:08.986 "zone_append": false, 00:17:08.986 "compare": false, 00:17:08.986 "compare_and_write": false, 00:17:08.986 "abort": false, 00:17:08.986 "seek_hole": false, 00:17:08.986 "seek_data": false, 00:17:08.986 "copy": false, 00:17:08.986 "nvme_iov_md": false 00:17:08.986 }, 00:17:08.986 "memory_domains": [ 00:17:08.986 { 00:17:08.986 "dma_device_id": "system", 00:17:08.986 "dma_device_type": 1 00:17:08.986 }, 00:17:08.986 { 00:17:08.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.986 "dma_device_type": 2 00:17:08.986 }, 00:17:08.986 { 00:17:08.986 "dma_device_id": "system", 00:17:08.986 "dma_device_type": 1 00:17:08.986 }, 00:17:08.986 { 00:17:08.986 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.986 "dma_device_type": 2 00:17:08.986 } 00:17:08.986 ], 00:17:08.986 "driver_specific": { 00:17:08.986 "raid": { 00:17:08.986 "uuid": "b4c1144c-42d0-11ef-96ac-773515fba644", 00:17:08.986 "strip_size_kb": 0, 00:17:08.986 "state": "online", 00:17:08.986 "raid_level": "raid1", 00:17:08.986 "superblock": true, 00:17:08.986 "num_base_bdevs": 2, 00:17:08.986 "num_base_bdevs_discovered": 2, 00:17:08.986 "num_base_bdevs_operational": 2, 00:17:08.986 "base_bdevs_list": [ 00:17:08.986 { 00:17:08.986 "name": "BaseBdev1", 00:17:08.986 "uuid": "b3d1d0e8-42d0-11ef-96ac-773515fba644", 00:17:08.986 "is_configured": true, 00:17:08.986 "data_offset": 256, 00:17:08.986 "data_size": 7936 00:17:08.986 }, 00:17:08.986 { 00:17:08.986 "name": "BaseBdev2", 00:17:08.986 "uuid": "b54895e9-42d0-11ef-96ac-773515fba644", 00:17:08.986 "is_configured": true, 00:17:08.986 "data_offset": 256, 00:17:08.986 "data_size": 7936 00:17:08.986 } 00:17:08.986 ] 00:17:08.986 } 00:17:08.986 } 00:17:08.986 }' 00:17:08.986 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:09.245 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:09.245 BaseBdev2' 00:17:09.245 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:09.245 17:36:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:09.245 17:36:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:09.245 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:09.245 "name": "BaseBdev1", 00:17:09.245 "aliases": [ 00:17:09.245 "b3d1d0e8-42d0-11ef-96ac-773515fba644" 00:17:09.245 ], 00:17:09.245 "product_name": "Malloc disk", 00:17:09.245 "block_size": 4096, 00:17:09.245 "num_blocks": 8192, 00:17:09.245 "uuid": "b3d1d0e8-42d0-11ef-96ac-773515fba644", 00:17:09.245 "assigned_rate_limits": { 00:17:09.245 "rw_ios_per_sec": 0, 00:17:09.245 "rw_mbytes_per_sec": 0, 00:17:09.245 "r_mbytes_per_sec": 0, 00:17:09.245 "w_mbytes_per_sec": 0 00:17:09.245 }, 00:17:09.245 "claimed": true, 00:17:09.245 "claim_type": "exclusive_write", 00:17:09.245 "zoned": false, 00:17:09.245 "supported_io_types": { 00:17:09.245 "read": true, 00:17:09.245 "write": true, 00:17:09.245 "unmap": true, 00:17:09.245 "flush": true, 00:17:09.245 "reset": true, 00:17:09.245 "nvme_admin": false, 00:17:09.245 "nvme_io": false, 00:17:09.245 "nvme_io_md": false, 00:17:09.245 "write_zeroes": true, 00:17:09.245 "zcopy": true, 00:17:09.245 "get_zone_info": false, 00:17:09.245 "zone_management": false, 00:17:09.245 "zone_append": false, 00:17:09.245 "compare": false, 00:17:09.245 "compare_and_write": false, 00:17:09.245 "abort": true, 00:17:09.245 "seek_hole": false, 00:17:09.245 "seek_data": false, 00:17:09.245 "copy": true, 00:17:09.245 "nvme_iov_md": false 00:17:09.245 }, 00:17:09.245 "memory_domains": [ 00:17:09.245 { 00:17:09.245 "dma_device_id": "system", 00:17:09.245 "dma_device_type": 1 00:17:09.245 }, 00:17:09.245 { 00:17:09.245 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.245 "dma_device_type": 2 00:17:09.245 } 00:17:09.245 ], 00:17:09.245 "driver_specific": {} 00:17:09.245 }' 00:17:09.245 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.245 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.245 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:09.245 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:09.504 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:09.762 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:09.762 "name": "BaseBdev2", 00:17:09.762 "aliases": [ 00:17:09.762 "b54895e9-42d0-11ef-96ac-773515fba644" 00:17:09.762 ], 00:17:09.762 "product_name": "Malloc disk", 00:17:09.762 "block_size": 4096, 00:17:09.762 "num_blocks": 8192, 00:17:09.762 "uuid": "b54895e9-42d0-11ef-96ac-773515fba644", 00:17:09.762 "assigned_rate_limits": { 00:17:09.762 "rw_ios_per_sec": 0, 00:17:09.762 "rw_mbytes_per_sec": 0, 00:17:09.762 "r_mbytes_per_sec": 0, 00:17:09.762 "w_mbytes_per_sec": 0 00:17:09.762 }, 00:17:09.762 "claimed": true, 00:17:09.762 "claim_type": "exclusive_write", 00:17:09.762 "zoned": false, 00:17:09.762 "supported_io_types": { 00:17:09.762 "read": true, 00:17:09.762 "write": true, 00:17:09.762 "unmap": true, 00:17:09.762 "flush": true, 00:17:09.762 "reset": true, 00:17:09.762 "nvme_admin": false, 00:17:09.762 "nvme_io": false, 00:17:09.762 "nvme_io_md": false, 00:17:09.762 "write_zeroes": true, 00:17:09.762 "zcopy": true, 00:17:09.762 "get_zone_info": false, 00:17:09.762 "zone_management": false, 00:17:09.762 "zone_append": false, 00:17:09.762 "compare": false, 00:17:09.762 "compare_and_write": false, 00:17:09.762 "abort": true, 00:17:09.762 "seek_hole": false, 00:17:09.762 "seek_data": false, 00:17:09.762 "copy": true, 00:17:09.762 "nvme_iov_md": false 00:17:09.762 }, 00:17:09.762 "memory_domains": [ 00:17:09.762 { 00:17:09.762 "dma_device_id": "system", 00:17:09.762 "dma_device_type": 1 00:17:09.762 }, 00:17:09.762 { 00:17:09.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.762 "dma_device_type": 2 00:17:09.763 } 00:17:09.763 ], 00:17:09.763 "driver_specific": {} 00:17:09.763 }' 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:09.763 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:10.020 [2024-07-15 17:36:05.651909] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:10.020 
17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.020 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.279 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:10.279 "name": "Existed_Raid", 00:17:10.279 "uuid": "b4c1144c-42d0-11ef-96ac-773515fba644", 00:17:10.279 "strip_size_kb": 0, 00:17:10.279 "state": "online", 00:17:10.279 "raid_level": "raid1", 00:17:10.279 "superblock": true, 00:17:10.279 "num_base_bdevs": 2, 00:17:10.279 "num_base_bdevs_discovered": 1, 00:17:10.279 "num_base_bdevs_operational": 1, 00:17:10.279 "base_bdevs_list": [ 00:17:10.279 { 00:17:10.279 "name": null, 00:17:10.279 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.279 "is_configured": false, 00:17:10.279 "data_offset": 256, 00:17:10.279 "data_size": 7936 00:17:10.279 }, 00:17:10.279 { 00:17:10.279 "name": "BaseBdev2", 00:17:10.279 "uuid": "b54895e9-42d0-11ef-96ac-773515fba644", 00:17:10.279 "is_configured": true, 00:17:10.279 "data_offset": 256, 00:17:10.279 "data_size": 7936 00:17:10.279 } 00:17:10.279 ] 00:17:10.279 }' 00:17:10.279 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:10.279 17:36:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.607 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:10.607 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:10.607 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:10.607 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:10.866 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:10.866 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:10.866 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:11.126 [2024-07-15 17:36:06.762000] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:11.126 [2024-07-15 17:36:06.762043] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.126 [2024-07-15 17:36:06.768439] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.126 [2024-07-15 17:36:06.768461] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:11.126 [2024-07-15 17:36:06.768466] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x314f0c34a00 name Existed_Raid, state offline 00:17:11.126 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:11.126 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:11.126 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.126 17:36:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 65542 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 65542 ']' 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 65542 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65542 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # tail -1 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:11.386 killing process with pid 65542 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65542' 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 65542 
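The teardown above deletes BaseBdev2 (taking Existed_Raid from online to offline, since BaseBdev1 was already removed) and then shuts the RPC target down with the FreeBSD branch of killprocess, which resolves the process name via ps -c rather than /proc. A condensed sketch of that shutdown, assuming raid_pid still holds the bdev_svc pid from earlier (the sudo special case in the trace is omitted):

  # FreeBSD-flavoured killprocess, as traced above
  kill -0 "$raid_pid"                                      # make sure the daemon is still running
  process_name=$(ps -c -o command "$raid_pid" | tail -1)   # resolve the command name (bdev_svc here)
  echo "killing process with pid $raid_pid"
  kill "$raid_pid"
  wait "$raid_pid"                                          # block until bdev_svc has exited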
00:17:11.386 [2024-07-15 17:36:07.070447] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:11.386 [2024-07-15 17:36:07.070480] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.386 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 65542 00:17:11.645 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:17:11.645 00:17:11.645 real 0m9.130s 00:17:11.645 user 0m16.013s 00:17:11.645 sys 0m1.474s 00:17:11.645 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:11.645 ************************************ 00:17:11.645 END TEST raid_state_function_test_sb_4k 00:17:11.645 ************************************ 00:17:11.645 17:36:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.645 17:36:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:11.645 17:36:07 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:17:11.645 17:36:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:11.645 17:36:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:11.645 17:36:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.645 ************************************ 00:17:11.645 START TEST raid_superblock_test_4k 00:17:11.645 ************************************ 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:11.645 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65816 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65816 /var/tmp/spdk-raid.sock 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 65816 ']' 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:11.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:11.646 17:36:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:11.646 [2024-07-15 17:36:07.315608] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:17:11.646 [2024-07-15 17:36:07.315810] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:12.213 EAL: TSC is not safe to use in SMP mode 00:17:12.213 EAL: TSC is not invariant 00:17:12.213 [2024-07-15 17:36:07.861172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.213 [2024-07-15 17:36:07.946644] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:12.213 [2024-07-15 17:36:07.948787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:12.213 [2024-07-15 17:36:07.949554] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.213 [2024-07-15 17:36:07.949568] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:17:12.780 malloc1 00:17:12.780 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:13.038 
[2024-07-15 17:36:08.805499] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:13.038 [2024-07-15 17:36:08.805590] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.038 [2024-07-15 17:36:08.805618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x373d3f834780 00:17:13.038 [2024-07-15 17:36:08.805626] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.038 [2024-07-15 17:36:08.806549] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.038 [2024-07-15 17:36:08.806589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:13.038 pt1 00:17:13.038 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:13.038 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:13.039 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:13.039 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:13.039 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:13.039 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:13.039 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:13.039 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:13.039 17:36:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:17:13.297 malloc2 00:17:13.297 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:13.556 [2024-07-15 17:36:09.341537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:13.556 [2024-07-15 17:36:09.341603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:13.556 [2024-07-15 17:36:09.341632] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x373d3f834c80 00:17:13.556 [2024-07-15 17:36:09.341641] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:13.556 [2024-07-15 17:36:09.342300] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:13.556 [2024-07-15 17:36:09.342323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:13.556 pt2 00:17:13.556 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:13.556 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:13.556 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:13.813 [2024-07-15 17:36:09.593587] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:13.813 [2024-07-15 17:36:09.594178] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:13.813 [2024-07-15 17:36:09.594240] 
bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x373d3f834f00 00:17:13.813 [2024-07-15 17:36:09.594247] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:13.813 [2024-07-15 17:36:09.594284] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x373d3f897e20 00:17:13.813 [2024-07-15 17:36:09.594359] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x373d3f834f00 00:17:13.813 [2024-07-15 17:36:09.594363] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x373d3f834f00 00:17:13.813 [2024-07-15 17:36:09.594389] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.813 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.071 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.071 "name": "raid_bdev1", 00:17:14.071 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:14.071 "strip_size_kb": 0, 00:17:14.071 "state": "online", 00:17:14.071 "raid_level": "raid1", 00:17:14.071 "superblock": true, 00:17:14.071 "num_base_bdevs": 2, 00:17:14.071 "num_base_bdevs_discovered": 2, 00:17:14.071 "num_base_bdevs_operational": 2, 00:17:14.071 "base_bdevs_list": [ 00:17:14.071 { 00:17:14.071 "name": "pt1", 00:17:14.071 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.071 "is_configured": true, 00:17:14.071 "data_offset": 256, 00:17:14.071 "data_size": 7936 00:17:14.071 }, 00:17:14.071 { 00:17:14.071 "name": "pt2", 00:17:14.071 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.071 "is_configured": true, 00:17:14.071 "data_offset": 256, 00:17:14.071 "data_size": 7936 00:17:14.071 } 00:17:14.071 ] 00:17:14.071 }' 00:17:14.071 17:36:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.071 17:36:09 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # 
local raid_bdev_name=raid_bdev1 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:14.637 [2024-07-15 17:36:10.389642] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:14.637 "name": "raid_bdev1", 00:17:14.637 "aliases": [ 00:17:14.637 "b8fc1d12-42d0-11ef-96ac-773515fba644" 00:17:14.637 ], 00:17:14.637 "product_name": "Raid Volume", 00:17:14.637 "block_size": 4096, 00:17:14.637 "num_blocks": 7936, 00:17:14.637 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:14.637 "assigned_rate_limits": { 00:17:14.637 "rw_ios_per_sec": 0, 00:17:14.637 "rw_mbytes_per_sec": 0, 00:17:14.637 "r_mbytes_per_sec": 0, 00:17:14.637 "w_mbytes_per_sec": 0 00:17:14.637 }, 00:17:14.637 "claimed": false, 00:17:14.637 "zoned": false, 00:17:14.637 "supported_io_types": { 00:17:14.637 "read": true, 00:17:14.637 "write": true, 00:17:14.637 "unmap": false, 00:17:14.637 "flush": false, 00:17:14.637 "reset": true, 00:17:14.637 "nvme_admin": false, 00:17:14.637 "nvme_io": false, 00:17:14.637 "nvme_io_md": false, 00:17:14.637 "write_zeroes": true, 00:17:14.637 "zcopy": false, 00:17:14.637 "get_zone_info": false, 00:17:14.637 "zone_management": false, 00:17:14.637 "zone_append": false, 00:17:14.637 "compare": false, 00:17:14.637 "compare_and_write": false, 00:17:14.637 "abort": false, 00:17:14.637 "seek_hole": false, 00:17:14.637 "seek_data": false, 00:17:14.637 "copy": false, 00:17:14.637 "nvme_iov_md": false 00:17:14.637 }, 00:17:14.637 "memory_domains": [ 00:17:14.637 { 00:17:14.637 "dma_device_id": "system", 00:17:14.637 "dma_device_type": 1 00:17:14.637 }, 00:17:14.637 { 00:17:14.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.637 "dma_device_type": 2 00:17:14.637 }, 00:17:14.637 { 00:17:14.637 "dma_device_id": "system", 00:17:14.637 "dma_device_type": 1 00:17:14.637 }, 00:17:14.637 { 00:17:14.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.637 "dma_device_type": 2 00:17:14.637 } 00:17:14.637 ], 00:17:14.637 "driver_specific": { 00:17:14.637 "raid": { 00:17:14.637 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:14.637 "strip_size_kb": 0, 00:17:14.637 "state": "online", 00:17:14.637 "raid_level": "raid1", 00:17:14.637 "superblock": true, 00:17:14.637 "num_base_bdevs": 2, 00:17:14.637 "num_base_bdevs_discovered": 2, 00:17:14.637 "num_base_bdevs_operational": 2, 00:17:14.637 "base_bdevs_list": [ 00:17:14.637 { 00:17:14.637 "name": "pt1", 00:17:14.637 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.637 "is_configured": true, 00:17:14.637 "data_offset": 256, 00:17:14.637 "data_size": 7936 00:17:14.637 }, 00:17:14.637 { 00:17:14.637 "name": "pt2", 00:17:14.637 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.637 "is_configured": true, 00:17:14.637 "data_offset": 256, 00:17:14.637 "data_size": 7936 
00:17:14.637 } 00:17:14.637 ] 00:17:14.637 } 00:17:14.637 } 00:17:14.637 }' 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:14.637 pt2' 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:14.637 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:14.896 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:14.896 "name": "pt1", 00:17:14.896 "aliases": [ 00:17:14.896 "00000000-0000-0000-0000-000000000001" 00:17:14.896 ], 00:17:14.896 "product_name": "passthru", 00:17:14.896 "block_size": 4096, 00:17:14.896 "num_blocks": 8192, 00:17:14.896 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.896 "assigned_rate_limits": { 00:17:14.896 "rw_ios_per_sec": 0, 00:17:14.896 "rw_mbytes_per_sec": 0, 00:17:14.896 "r_mbytes_per_sec": 0, 00:17:14.896 "w_mbytes_per_sec": 0 00:17:14.896 }, 00:17:14.896 "claimed": true, 00:17:14.896 "claim_type": "exclusive_write", 00:17:14.896 "zoned": false, 00:17:14.896 "supported_io_types": { 00:17:14.896 "read": true, 00:17:14.896 "write": true, 00:17:14.896 "unmap": true, 00:17:14.896 "flush": true, 00:17:14.896 "reset": true, 00:17:14.896 "nvme_admin": false, 00:17:14.896 "nvme_io": false, 00:17:14.896 "nvme_io_md": false, 00:17:14.896 "write_zeroes": true, 00:17:14.896 "zcopy": true, 00:17:14.896 "get_zone_info": false, 00:17:14.896 "zone_management": false, 00:17:14.896 "zone_append": false, 00:17:14.896 "compare": false, 00:17:14.896 "compare_and_write": false, 00:17:14.896 "abort": true, 00:17:14.896 "seek_hole": false, 00:17:14.896 "seek_data": false, 00:17:14.896 "copy": true, 00:17:14.896 "nvme_iov_md": false 00:17:14.896 }, 00:17:14.896 "memory_domains": [ 00:17:14.896 { 00:17:14.896 "dma_device_id": "system", 00:17:14.896 "dma_device_type": 1 00:17:14.896 }, 00:17:14.896 { 00:17:14.896 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:14.896 "dma_device_type": 2 00:17:14.896 } 00:17:14.896 ], 00:17:14.896 "driver_specific": { 00:17:14.896 "passthru": { 00:17:14.896 "name": "pt1", 00:17:14.896 "base_bdev_name": "malloc1" 00:17:14.896 } 00:17:14.896 } 00:17:14.896 }' 00:17:14.896 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.896 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:14.896 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:14.896 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.896 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:14.896 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:14.896 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.153 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.153 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.153 17:36:10 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.153 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.153 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.153 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:15.153 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:15.153 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:15.412 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:15.412 "name": "pt2", 00:17:15.412 "aliases": [ 00:17:15.412 "00000000-0000-0000-0000-000000000002" 00:17:15.412 ], 00:17:15.412 "product_name": "passthru", 00:17:15.412 "block_size": 4096, 00:17:15.412 "num_blocks": 8192, 00:17:15.412 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.412 "assigned_rate_limits": { 00:17:15.412 "rw_ios_per_sec": 0, 00:17:15.412 "rw_mbytes_per_sec": 0, 00:17:15.412 "r_mbytes_per_sec": 0, 00:17:15.412 "w_mbytes_per_sec": 0 00:17:15.412 }, 00:17:15.412 "claimed": true, 00:17:15.412 "claim_type": "exclusive_write", 00:17:15.412 "zoned": false, 00:17:15.412 "supported_io_types": { 00:17:15.412 "read": true, 00:17:15.412 "write": true, 00:17:15.412 "unmap": true, 00:17:15.412 "flush": true, 00:17:15.412 "reset": true, 00:17:15.412 "nvme_admin": false, 00:17:15.412 "nvme_io": false, 00:17:15.412 "nvme_io_md": false, 00:17:15.412 "write_zeroes": true, 00:17:15.412 "zcopy": true, 00:17:15.412 "get_zone_info": false, 00:17:15.412 "zone_management": false, 00:17:15.412 "zone_append": false, 00:17:15.412 "compare": false, 00:17:15.412 "compare_and_write": false, 00:17:15.412 "abort": true, 00:17:15.412 "seek_hole": false, 00:17:15.412 "seek_data": false, 00:17:15.412 "copy": true, 00:17:15.412 "nvme_iov_md": false 00:17:15.412 }, 00:17:15.412 "memory_domains": [ 00:17:15.412 { 00:17:15.412 "dma_device_id": "system", 00:17:15.412 "dma_device_type": 1 00:17:15.412 }, 00:17:15.412 { 00:17:15.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.412 "dma_device_type": 2 00:17:15.412 } 00:17:15.412 ], 00:17:15.412 "driver_specific": { 00:17:15.412 "passthru": { 00:17:15.412 "name": "pt2", 00:17:15.412 "base_bdev_name": "malloc2" 00:17:15.412 } 00:17:15.412 } 00:17:15.412 }' 00:17:15.412 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.412 17:36:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.412 17:36:11 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:15.412 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:15.669 [2024-07-15 17:36:11.293668] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.669 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=b8fc1d12-42d0-11ef-96ac-773515fba644 00:17:15.670 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z b8fc1d12-42d0-11ef-96ac-773515fba644 ']' 00:17:15.670 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:15.928 [2024-07-15 17:36:11.537612] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.928 [2024-07-15 17:36:11.537638] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.928 [2024-07-15 17:36:11.537676] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.928 [2024-07-15 17:36:11.537691] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.928 [2024-07-15 17:36:11.537695] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x373d3f834f00 name raid_bdev1, state offline 00:17:15.928 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.928 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:16.185 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:16.185 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:16.185 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:16.185 17:36:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:16.456 17:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:16.456 17:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:16.716 17:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:16.716 17:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 
00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:16.974 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:17.231 [2024-07-15 17:36:12.809670] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:17.231 [2024-07-15 17:36:12.810257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:17.231 [2024-07-15 17:36:12.810282] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:17.231 [2024-07-15 17:36:12.810321] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:17.231 [2024-07-15 17:36:12.810332] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.231 [2024-07-15 17:36:12.810336] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x373d3f834c80 name raid_bdev1, state configuring 00:17:17.231 request: 00:17:17.231 { 00:17:17.231 "name": "raid_bdev1", 00:17:17.231 "raid_level": "raid1", 00:17:17.231 "base_bdevs": [ 00:17:17.231 "malloc1", 00:17:17.231 "malloc2" 00:17:17.231 ], 00:17:17.231 "superblock": false, 00:17:17.231 "method": "bdev_raid_create", 00:17:17.231 "req_id": 1 00:17:17.231 } 00:17:17.231 Got JSON-RPC error response 00:17:17.231 response: 00:17:17.231 { 00:17:17.231 "code": -17, 00:17:17.231 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:17.231 } 00:17:17.231 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:17:17.231 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:17.231 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:17.231 17:36:12 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:17.231 17:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.231 
17:36:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:17.489 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:17.489 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:17.489 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:17.748 [2024-07-15 17:36:13.409686] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:17.748 [2024-07-15 17:36:13.409801] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.748 [2024-07-15 17:36:13.409830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x373d3f834780 00:17:17.748 [2024-07-15 17:36:13.409838] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.748 [2024-07-15 17:36:13.410491] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.748 [2024-07-15 17:36:13.410522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:17.748 [2024-07-15 17:36:13.410548] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:17.748 [2024-07-15 17:36:13.410560] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:17.748 pt1 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.748 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.006 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.006 "name": "raid_bdev1", 00:17:18.006 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:18.006 "strip_size_kb": 0, 00:17:18.006 "state": "configuring", 00:17:18.006 "raid_level": "raid1", 00:17:18.006 "superblock": true, 00:17:18.006 "num_base_bdevs": 2, 00:17:18.006 "num_base_bdevs_discovered": 1, 00:17:18.006 "num_base_bdevs_operational": 2, 00:17:18.006 "base_bdevs_list": [ 00:17:18.006 { 00:17:18.006 "name": "pt1", 00:17:18.006 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:17:18.006 "is_configured": true, 00:17:18.006 "data_offset": 256, 00:17:18.006 "data_size": 7936 00:17:18.006 }, 00:17:18.006 { 00:17:18.006 "name": null, 00:17:18.006 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.006 "is_configured": false, 00:17:18.006 "data_offset": 256, 00:17:18.006 "data_size": 7936 00:17:18.006 } 00:17:18.006 ] 00:17:18.006 }' 00:17:18.006 17:36:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.006 17:36:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:18.265 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:18.265 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:18.265 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:18.265 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:18.523 [2024-07-15 17:36:14.257786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:18.523 [2024-07-15 17:36:14.257841] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.523 [2024-07-15 17:36:14.257853] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x373d3f834f00 00:17:18.523 [2024-07-15 17:36:14.257861] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.523 [2024-07-15 17:36:14.257985] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.523 [2024-07-15 17:36:14.257996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:18.523 [2024-07-15 17:36:14.258018] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:18.523 [2024-07-15 17:36:14.258027] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:18.523 [2024-07-15 17:36:14.258054] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x373d3f835180 00:17:18.523 [2024-07-15 17:36:14.258058] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:18.523 [2024-07-15 17:36:14.258077] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x373d3f897e20 00:17:18.523 [2024-07-15 17:36:14.258131] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x373d3f835180 00:17:18.523 [2024-07-15 17:36:14.258135] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x373d3f835180 00:17:18.523 [2024-07-15 17:36:14.258157] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.523 pt2 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.523 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.781 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:18.781 "name": "raid_bdev1", 00:17:18.781 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:18.781 "strip_size_kb": 0, 00:17:18.782 "state": "online", 00:17:18.782 "raid_level": "raid1", 00:17:18.782 "superblock": true, 00:17:18.782 "num_base_bdevs": 2, 00:17:18.782 "num_base_bdevs_discovered": 2, 00:17:18.782 "num_base_bdevs_operational": 2, 00:17:18.782 "base_bdevs_list": [ 00:17:18.782 { 00:17:18.782 "name": "pt1", 00:17:18.782 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:18.782 "is_configured": true, 00:17:18.782 "data_offset": 256, 00:17:18.782 "data_size": 7936 00:17:18.782 }, 00:17:18.782 { 00:17:18.782 "name": "pt2", 00:17:18.782 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.782 "is_configured": true, 00:17:18.782 "data_offset": 256, 00:17:18.782 "data_size": 7936 00:17:18.782 } 00:17:18.782 ] 00:17:18.782 }' 00:17:18.782 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:18.782 17:36:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:19.040 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:19.040 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:19.040 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:19.040 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:19.040 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:19.040 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:19.040 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:19.040 17:36:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:19.299 [2024-07-15 17:36:15.077832] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:19.299 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:19.299 "name": "raid_bdev1", 00:17:19.299 "aliases": [ 00:17:19.299 "b8fc1d12-42d0-11ef-96ac-773515fba644" 00:17:19.299 ], 00:17:19.299 "product_name": "Raid Volume", 00:17:19.299 "block_size": 4096, 
00:17:19.299 "num_blocks": 7936, 00:17:19.299 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:19.299 "assigned_rate_limits": { 00:17:19.299 "rw_ios_per_sec": 0, 00:17:19.299 "rw_mbytes_per_sec": 0, 00:17:19.299 "r_mbytes_per_sec": 0, 00:17:19.299 "w_mbytes_per_sec": 0 00:17:19.299 }, 00:17:19.299 "claimed": false, 00:17:19.299 "zoned": false, 00:17:19.299 "supported_io_types": { 00:17:19.299 "read": true, 00:17:19.299 "write": true, 00:17:19.299 "unmap": false, 00:17:19.299 "flush": false, 00:17:19.299 "reset": true, 00:17:19.299 "nvme_admin": false, 00:17:19.299 "nvme_io": false, 00:17:19.299 "nvme_io_md": false, 00:17:19.299 "write_zeroes": true, 00:17:19.299 "zcopy": false, 00:17:19.299 "get_zone_info": false, 00:17:19.299 "zone_management": false, 00:17:19.299 "zone_append": false, 00:17:19.299 "compare": false, 00:17:19.299 "compare_and_write": false, 00:17:19.299 "abort": false, 00:17:19.299 "seek_hole": false, 00:17:19.299 "seek_data": false, 00:17:19.299 "copy": false, 00:17:19.299 "nvme_iov_md": false 00:17:19.299 }, 00:17:19.299 "memory_domains": [ 00:17:19.299 { 00:17:19.299 "dma_device_id": "system", 00:17:19.299 "dma_device_type": 1 00:17:19.299 }, 00:17:19.299 { 00:17:19.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.299 "dma_device_type": 2 00:17:19.299 }, 00:17:19.299 { 00:17:19.299 "dma_device_id": "system", 00:17:19.299 "dma_device_type": 1 00:17:19.299 }, 00:17:19.299 { 00:17:19.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.299 "dma_device_type": 2 00:17:19.299 } 00:17:19.299 ], 00:17:19.299 "driver_specific": { 00:17:19.299 "raid": { 00:17:19.299 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:19.299 "strip_size_kb": 0, 00:17:19.299 "state": "online", 00:17:19.299 "raid_level": "raid1", 00:17:19.299 "superblock": true, 00:17:19.299 "num_base_bdevs": 2, 00:17:19.299 "num_base_bdevs_discovered": 2, 00:17:19.299 "num_base_bdevs_operational": 2, 00:17:19.299 "base_bdevs_list": [ 00:17:19.299 { 00:17:19.299 "name": "pt1", 00:17:19.299 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.299 "is_configured": true, 00:17:19.299 "data_offset": 256, 00:17:19.299 "data_size": 7936 00:17:19.299 }, 00:17:19.299 { 00:17:19.299 "name": "pt2", 00:17:19.299 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.299 "is_configured": true, 00:17:19.299 "data_offset": 256, 00:17:19.299 "data_size": 7936 00:17:19.299 } 00:17:19.299 ] 00:17:19.299 } 00:17:19.299 } 00:17:19.299 }' 00:17:19.299 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:19.299 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:19.299 pt2' 00:17:19.299 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:19.299 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:19.299 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:19.866 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:19.867 "name": "pt1", 00:17:19.867 "aliases": [ 00:17:19.867 "00000000-0000-0000-0000-000000000001" 00:17:19.867 ], 00:17:19.867 "product_name": "passthru", 00:17:19.867 "block_size": 4096, 00:17:19.867 "num_blocks": 8192, 00:17:19.867 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.867 
"assigned_rate_limits": { 00:17:19.867 "rw_ios_per_sec": 0, 00:17:19.867 "rw_mbytes_per_sec": 0, 00:17:19.867 "r_mbytes_per_sec": 0, 00:17:19.867 "w_mbytes_per_sec": 0 00:17:19.867 }, 00:17:19.867 "claimed": true, 00:17:19.867 "claim_type": "exclusive_write", 00:17:19.867 "zoned": false, 00:17:19.867 "supported_io_types": { 00:17:19.867 "read": true, 00:17:19.867 "write": true, 00:17:19.867 "unmap": true, 00:17:19.867 "flush": true, 00:17:19.867 "reset": true, 00:17:19.867 "nvme_admin": false, 00:17:19.867 "nvme_io": false, 00:17:19.867 "nvme_io_md": false, 00:17:19.867 "write_zeroes": true, 00:17:19.867 "zcopy": true, 00:17:19.867 "get_zone_info": false, 00:17:19.867 "zone_management": false, 00:17:19.867 "zone_append": false, 00:17:19.867 "compare": false, 00:17:19.867 "compare_and_write": false, 00:17:19.867 "abort": true, 00:17:19.867 "seek_hole": false, 00:17:19.867 "seek_data": false, 00:17:19.867 "copy": true, 00:17:19.867 "nvme_iov_md": false 00:17:19.867 }, 00:17:19.867 "memory_domains": [ 00:17:19.867 { 00:17:19.867 "dma_device_id": "system", 00:17:19.867 "dma_device_type": 1 00:17:19.867 }, 00:17:19.867 { 00:17:19.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:19.867 "dma_device_type": 2 00:17:19.867 } 00:17:19.867 ], 00:17:19.867 "driver_specific": { 00:17:19.867 "passthru": { 00:17:19.867 "name": "pt1", 00:17:19.867 "base_bdev_name": "malloc1" 00:17:19.867 } 00:17:19.867 } 00:17:19.867 }' 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:19.867 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:20.126 "name": "pt2", 00:17:20.126 "aliases": [ 00:17:20.126 "00000000-0000-0000-0000-000000000002" 00:17:20.126 ], 00:17:20.126 "product_name": "passthru", 00:17:20.126 "block_size": 4096, 00:17:20.126 "num_blocks": 8192, 00:17:20.126 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.126 "assigned_rate_limits": { 00:17:20.126 "rw_ios_per_sec": 0, 00:17:20.126 "rw_mbytes_per_sec": 0, 
00:17:20.126 "r_mbytes_per_sec": 0, 00:17:20.126 "w_mbytes_per_sec": 0 00:17:20.126 }, 00:17:20.126 "claimed": true, 00:17:20.126 "claim_type": "exclusive_write", 00:17:20.126 "zoned": false, 00:17:20.126 "supported_io_types": { 00:17:20.126 "read": true, 00:17:20.126 "write": true, 00:17:20.126 "unmap": true, 00:17:20.126 "flush": true, 00:17:20.126 "reset": true, 00:17:20.126 "nvme_admin": false, 00:17:20.126 "nvme_io": false, 00:17:20.126 "nvme_io_md": false, 00:17:20.126 "write_zeroes": true, 00:17:20.126 "zcopy": true, 00:17:20.126 "get_zone_info": false, 00:17:20.126 "zone_management": false, 00:17:20.126 "zone_append": false, 00:17:20.126 "compare": false, 00:17:20.126 "compare_and_write": false, 00:17:20.126 "abort": true, 00:17:20.126 "seek_hole": false, 00:17:20.126 "seek_data": false, 00:17:20.126 "copy": true, 00:17:20.126 "nvme_iov_md": false 00:17:20.126 }, 00:17:20.126 "memory_domains": [ 00:17:20.126 { 00:17:20.126 "dma_device_id": "system", 00:17:20.126 "dma_device_type": 1 00:17:20.126 }, 00:17:20.126 { 00:17:20.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:20.126 "dma_device_type": 2 00:17:20.126 } 00:17:20.126 ], 00:17:20.126 "driver_specific": { 00:17:20.126 "passthru": { 00:17:20.126 "name": "pt2", 00:17:20.126 "base_bdev_name": "malloc2" 00:17:20.126 } 00:17:20.126 } 00:17:20.126 }' 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:20.126 17:36:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:20.394 [2024-07-15 17:36:16.117859] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.394 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' b8fc1d12-42d0-11ef-96ac-773515fba644 '!=' b8fc1d12-42d0-11ef-96ac-773515fba644 ']' 00:17:20.394 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:20.394 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:20.394 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:17:20.394 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:20.655 [2024-07-15 17:36:16.409832] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:20.655 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.912 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:20.912 "name": "raid_bdev1", 00:17:20.912 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:20.912 "strip_size_kb": 0, 00:17:20.912 "state": "online", 00:17:20.912 "raid_level": "raid1", 00:17:20.912 "superblock": true, 00:17:20.912 "num_base_bdevs": 2, 00:17:20.912 "num_base_bdevs_discovered": 1, 00:17:20.912 "num_base_bdevs_operational": 1, 00:17:20.912 "base_bdevs_list": [ 00:17:20.912 { 00:17:20.912 "name": null, 00:17:20.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:20.912 "is_configured": false, 00:17:20.912 "data_offset": 256, 00:17:20.912 "data_size": 7936 00:17:20.912 }, 00:17:20.912 { 00:17:20.912 "name": "pt2", 00:17:20.912 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.912 "is_configured": true, 00:17:20.912 "data_offset": 256, 00:17:20.912 "data_size": 7936 00:17:20.912 } 00:17:20.912 ] 00:17:20.912 }' 00:17:20.912 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:20.912 17:36:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:21.169 17:36:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:21.427 [2024-07-15 17:36:17.205919] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:21.427 [2024-07-15 17:36:17.205941] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:21.427 [2024-07-15 17:36:17.205977] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:21.427 [2024-07-15 17:36:17.205987] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:21.427 [2024-07-15 17:36:17.205991] 
bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x373d3f835180 name raid_bdev1, state offline 00:17:21.427 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:21.427 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.685 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:21.685 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:21.685 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:21.685 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:21.685 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:21.942 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:21.942 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:21.942 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:21.942 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:21.942 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:17:21.942 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:22.200 [2024-07-15 17:36:17.941968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:22.200 [2024-07-15 17:36:17.942050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.200 [2024-07-15 17:36:17.942062] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x373d3f834f00 00:17:22.200 [2024-07-15 17:36:17.942070] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.200 [2024-07-15 17:36:17.942731] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.200 [2024-07-15 17:36:17.942755] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:22.200 [2024-07-15 17:36:17.942780] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:22.200 [2024-07-15 17:36:17.942791] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.200 [2024-07-15 17:36:17.942816] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x373d3f835180 00:17:22.200 [2024-07-15 17:36:17.942821] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:22.200 [2024-07-15 17:36:17.942851] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x373d3f897e20 00:17:22.200 [2024-07-15 17:36:17.942909] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x373d3f835180 00:17:22.200 [2024-07-15 17:36:17.942917] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x373d3f835180 00:17:22.200 [2024-07-15 17:36:17.942940] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.200 pt2 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:22.200 17:36:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.458 17:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:22.458 "name": "raid_bdev1", 00:17:22.458 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:22.458 "strip_size_kb": 0, 00:17:22.458 "state": "online", 00:17:22.458 "raid_level": "raid1", 00:17:22.458 "superblock": true, 00:17:22.458 "num_base_bdevs": 2, 00:17:22.458 "num_base_bdevs_discovered": 1, 00:17:22.458 "num_base_bdevs_operational": 1, 00:17:22.458 "base_bdevs_list": [ 00:17:22.458 { 00:17:22.458 "name": null, 00:17:22.458 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.458 "is_configured": false, 00:17:22.458 "data_offset": 256, 00:17:22.458 "data_size": 7936 00:17:22.459 }, 00:17:22.459 { 00:17:22.459 "name": "pt2", 00:17:22.459 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.459 "is_configured": true, 00:17:22.459 "data_offset": 256, 00:17:22.459 "data_size": 7936 00:17:22.459 } 00:17:22.459 ] 00:17:22.459 }' 00:17:22.459 17:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:22.459 17:36:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:23.027 17:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:23.307 [2024-07-15 17:36:18.866004] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.307 [2024-07-15 17:36:18.866027] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.307 [2024-07-15 17:36:18.866049] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.307 [2024-07-15 17:36:18.866061] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.307 [2024-07-15 17:36:18.866065] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x373d3f835180 name raid_bdev1, state offline 00:17:23.307 17:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.307 17:36:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:23.307 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:23.307 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:23.307 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:23.307 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:23.872 [2024-07-15 17:36:19.406012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:23.872 [2024-07-15 17:36:19.406077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.872 [2024-07-15 17:36:19.406105] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x373d3f834c80 00:17:23.872 [2024-07-15 17:36:19.406130] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:23.873 [2024-07-15 17:36:19.406765] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.873 [2024-07-15 17:36:19.406788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:23.873 [2024-07-15 17:36:19.406813] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:23.873 [2024-07-15 17:36:19.406825] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:23.873 [2024-07-15 17:36:19.406854] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:23.873 [2024-07-15 17:36:19.406859] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.873 [2024-07-15 17:36:19.406863] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x373d3f834780 name raid_bdev1, state configuring 00:17:23.873 [2024-07-15 17:36:19.406871] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.873 [2024-07-15 17:36:19.406886] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x373d3f834780 00:17:23.873 [2024-07-15 17:36:19.406890] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:23.873 [2024-07-15 17:36:19.406913] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x373d3f897e20 00:17:23.873 [2024-07-15 17:36:19.406975] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x373d3f834780 00:17:23.873 [2024-07-15 17:36:19.406987] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x373d3f834780 00:17:23.873 [2024-07-15 17:36:19.407010] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.873 pt1 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:23.873 "name": "raid_bdev1", 00:17:23.873 "uuid": "b8fc1d12-42d0-11ef-96ac-773515fba644", 00:17:23.873 "strip_size_kb": 0, 00:17:23.873 "state": "online", 00:17:23.873 "raid_level": "raid1", 00:17:23.873 "superblock": true, 00:17:23.873 "num_base_bdevs": 2, 00:17:23.873 "num_base_bdevs_discovered": 1, 00:17:23.873 "num_base_bdevs_operational": 1, 00:17:23.873 "base_bdevs_list": [ 00:17:23.873 { 00:17:23.873 "name": null, 00:17:23.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.873 "is_configured": false, 00:17:23.873 "data_offset": 256, 00:17:23.873 "data_size": 7936 00:17:23.873 }, 00:17:23.873 { 00:17:23.873 "name": "pt2", 00:17:23.873 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.873 "is_configured": true, 00:17:23.873 "data_offset": 256, 00:17:23.873 "data_size": 7936 00:17:23.873 } 00:17:23.873 ] 00:17:23.873 }' 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:23.873 17:36:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.130 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:24.130 17:36:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:24.698 [2024-07-15 17:36:20.494081] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' b8fc1d12-42d0-11ef-96ac-773515fba644 '!=' b8fc1d12-42d0-11ef-96ac-773515fba644 ']' 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65816 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 65816 ']' 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 65816 00:17:24.698 17:36:20 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65816 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # tail -1 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:24.698 killing process with pid 65816 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65816' 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 65816 00:17:24.698 [2024-07-15 17:36:20.523650] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:24.698 [2024-07-15 17:36:20.523676] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.698 [2024-07-15 17:36:20.523687] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.698 [2024-07-15 17:36:20.523691] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x373d3f834780 name raid_bdev1, state offline 00:17:24.698 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 65816 00:17:24.957 [2024-07-15 17:36:20.536639] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:24.957 17:36:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:17:24.957 ************************************ 00:17:24.958 END TEST raid_superblock_test_4k 00:17:24.958 ************************************ 00:17:24.958 00:17:24.958 real 0m13.406s 00:17:24.958 user 0m23.964s 00:17:24.958 sys 0m2.083s 00:17:24.958 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:24.958 17:36:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:24.958 17:36:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:24.958 17:36:20 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:17:24.958 17:36:20 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:17:24.958 17:36:20 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:24.958 17:36:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:24.958 17:36:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:24.958 17:36:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:24.958 ************************************ 00:17:24.958 START TEST raid_state_function_test_sb_md_separate 00:17:24.958 ************************************ 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local superblock=true 
00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=66203 00:17:24.958 Process raid pid: 66203 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66203' 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 66203 /var/tmp/spdk-raid.sock 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66203 ']' 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.958 17:36:20 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:24.958 [2024-07-15 17:36:20.777751] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:17:24.958 [2024-07-15 17:36:20.777957] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:25.526 EAL: TSC is not safe to use in SMP mode 00:17:25.526 EAL: TSC is not invariant 00:17:25.526 [2024-07-15 17:36:21.312097] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.785 [2024-07-15 17:36:21.393889] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:25.785 [2024-07-15 17:36:21.396113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.785 [2024-07-15 17:36:21.396964] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:25.785 [2024-07-15 17:36:21.396984] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:26.045 17:36:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.045 17:36:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:17:26.045 17:36:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:26.303 [2024-07-15 17:36:22.083995] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:26.303 [2024-07-15 17:36:22.084039] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:26.303 [2024-07-15 17:36:22.084045] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:26.303 [2024-07-15 17:36:22.084054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:26.303 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:26.561 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:26.561 "name": "Existed_Raid", 00:17:26.561 "uuid": "c06dffd3-42d0-11ef-96ac-773515fba644", 00:17:26.561 "strip_size_kb": 0, 00:17:26.561 "state": "configuring", 00:17:26.561 "raid_level": "raid1", 00:17:26.561 "superblock": true, 00:17:26.561 "num_base_bdevs": 2, 00:17:26.561 "num_base_bdevs_discovered": 0, 00:17:26.561 "num_base_bdevs_operational": 2, 00:17:26.561 "base_bdevs_list": [ 00:17:26.561 { 00:17:26.561 "name": "BaseBdev1", 00:17:26.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.561 "is_configured": false, 00:17:26.561 "data_offset": 0, 00:17:26.561 "data_size": 0 00:17:26.561 }, 00:17:26.561 { 00:17:26.561 "name": "BaseBdev2", 00:17:26.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:26.561 "is_configured": false, 00:17:26.561 "data_offset": 0, 00:17:26.561 "data_size": 0 00:17:26.561 } 00:17:26.561 ] 00:17:26.561 }' 00:17:26.561 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:26.562 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:27.127 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:27.127 [2024-07-15 17:36:22.895982] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:27.127 [2024-07-15 17:36:22.896011] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32d76da34500 name Existed_Raid, state configuring 00:17:27.127 17:36:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:27.386 [2024-07-15 17:36:23.139994] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:27.386 [2024-07-15 17:36:23.140038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:27.386 [2024-07-15 17:36:23.140043] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:27.386 [2024-07-15 17:36:23.140052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:27.386 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:27.644 [2024-07-15 17:36:23.385069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:27.644 BaseBdev1 00:17:27.644 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:27.644 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:27.644 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:27.644 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:17:27.644 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:27.644 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:27.644 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:27.902 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:28.161 [ 00:17:28.161 { 00:17:28.161 "name": "BaseBdev1", 00:17:28.161 "aliases": [ 00:17:28.161 "c1345d65-42d0-11ef-96ac-773515fba644" 00:17:28.161 ], 00:17:28.161 "product_name": "Malloc disk", 00:17:28.161 "block_size": 4096, 00:17:28.161 "num_blocks": 8192, 00:17:28.161 "uuid": "c1345d65-42d0-11ef-96ac-773515fba644", 00:17:28.161 "md_size": 32, 00:17:28.161 "md_interleave": false, 00:17:28.161 "dif_type": 0, 00:17:28.161 "assigned_rate_limits": { 00:17:28.161 "rw_ios_per_sec": 0, 00:17:28.161 "rw_mbytes_per_sec": 0, 00:17:28.161 "r_mbytes_per_sec": 0, 00:17:28.161 "w_mbytes_per_sec": 0 00:17:28.161 }, 00:17:28.161 "claimed": true, 00:17:28.161 "claim_type": "exclusive_write", 00:17:28.161 "zoned": false, 00:17:28.161 "supported_io_types": { 00:17:28.161 "read": true, 00:17:28.161 "write": true, 00:17:28.161 "unmap": true, 00:17:28.161 "flush": true, 00:17:28.161 "reset": true, 00:17:28.161 "nvme_admin": false, 00:17:28.161 "nvme_io": false, 00:17:28.161 "nvme_io_md": false, 00:17:28.161 "write_zeroes": true, 00:17:28.161 "zcopy": true, 00:17:28.161 "get_zone_info": false, 00:17:28.161 "zone_management": false, 00:17:28.161 "zone_append": false, 00:17:28.161 "compare": false, 00:17:28.161 "compare_and_write": false, 00:17:28.161 "abort": true, 00:17:28.161 "seek_hole": false, 00:17:28.161 "seek_data": false, 00:17:28.161 "copy": true, 00:17:28.161 "nvme_iov_md": false 00:17:28.161 }, 00:17:28.161 "memory_domains": [ 00:17:28.161 { 00:17:28.161 "dma_device_id": "system", 00:17:28.161 "dma_device_type": 1 00:17:28.161 }, 00:17:28.161 { 00:17:28.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:28.161 "dma_device_type": 2 00:17:28.161 } 00:17:28.161 ], 00:17:28.161 "driver_specific": {} 00:17:28.161 } 00:17:28.161 ] 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:28.161 17:36:23 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:28.161 17:36:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.419 17:36:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:28.419 "name": "Existed_Raid", 00:17:28.419 "uuid": "c10f21cc-42d0-11ef-96ac-773515fba644", 00:17:28.419 "strip_size_kb": 0, 00:17:28.419 "state": "configuring", 00:17:28.419 "raid_level": "raid1", 00:17:28.419 "superblock": true, 00:17:28.419 "num_base_bdevs": 2, 00:17:28.419 "num_base_bdevs_discovered": 1, 00:17:28.419 "num_base_bdevs_operational": 2, 00:17:28.419 "base_bdevs_list": [ 00:17:28.419 { 00:17:28.419 "name": "BaseBdev1", 00:17:28.419 "uuid": "c1345d65-42d0-11ef-96ac-773515fba644", 00:17:28.419 "is_configured": true, 00:17:28.419 "data_offset": 256, 00:17:28.419 "data_size": 7936 00:17:28.419 }, 00:17:28.419 { 00:17:28.419 "name": "BaseBdev2", 00:17:28.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:28.419 "is_configured": false, 00:17:28.419 "data_offset": 0, 00:17:28.419 "data_size": 0 00:17:28.419 } 00:17:28.419 ] 00:17:28.419 }' 00:17:28.419 17:36:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:28.419 17:36:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.986 17:36:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:28.986 [2024-07-15 17:36:24.816038] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:28.986 [2024-07-15 17:36:24.816087] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32d76da34500 name Existed_Raid, state configuring 00:17:29.245 17:36:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:29.502 [2024-07-15 17:36:25.120071] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:29.502 [2024-07-15 17:36:25.120882] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:29.502 [2024-07-15 17:36:25.120922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:29.502 17:36:25 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.502 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:29.760 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.760 "name": "Existed_Raid", 00:17:29.760 "uuid": "c23d4467-42d0-11ef-96ac-773515fba644", 00:17:29.760 "strip_size_kb": 0, 00:17:29.760 "state": "configuring", 00:17:29.760 "raid_level": "raid1", 00:17:29.760 "superblock": true, 00:17:29.760 "num_base_bdevs": 2, 00:17:29.760 "num_base_bdevs_discovered": 1, 00:17:29.760 "num_base_bdevs_operational": 2, 00:17:29.760 "base_bdevs_list": [ 00:17:29.760 { 00:17:29.760 "name": "BaseBdev1", 00:17:29.760 "uuid": "c1345d65-42d0-11ef-96ac-773515fba644", 00:17:29.760 "is_configured": true, 00:17:29.760 "data_offset": 256, 00:17:29.760 "data_size": 7936 00:17:29.760 }, 00:17:29.760 { 00:17:29.760 "name": "BaseBdev2", 00:17:29.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.760 "is_configured": false, 00:17:29.760 "data_offset": 0, 00:17:29.760 "data_size": 0 00:17:29.760 } 00:17:29.760 ] 00:17:29.760 }' 00:17:29.760 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:29.760 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.019 17:36:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:30.277 [2024-07-15 17:36:26.028410] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:30.277 [2024-07-15 17:36:26.028513] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x32d76da34a00 00:17:30.277 [2024-07-15 17:36:26.028539] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:30.277 [2024-07-15 17:36:26.028600] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x32d76da97e20 00:17:30.277 [2024-07-15 17:36:26.028645] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x32d76da34a00 00:17:30.277 [2024-07-15 17:36:26.028652] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x32d76da34a00 00:17:30.277 [2024-07-15 17:36:26.028681] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.277 BaseBdev2 00:17:30.277 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:30.277 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:30.277 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:30.277 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:17:30.277 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:30.277 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:30.277 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:30.585 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:30.844 [ 00:17:30.844 { 00:17:30.844 "name": "BaseBdev2", 00:17:30.844 "aliases": [ 00:17:30.844 "c2c7d9c3-42d0-11ef-96ac-773515fba644" 00:17:30.844 ], 00:17:30.844 "product_name": "Malloc disk", 00:17:30.844 "block_size": 4096, 00:17:30.844 "num_blocks": 8192, 00:17:30.844 "uuid": "c2c7d9c3-42d0-11ef-96ac-773515fba644", 00:17:30.844 "md_size": 32, 00:17:30.844 "md_interleave": false, 00:17:30.844 "dif_type": 0, 00:17:30.844 "assigned_rate_limits": { 00:17:30.844 "rw_ios_per_sec": 0, 00:17:30.844 "rw_mbytes_per_sec": 0, 00:17:30.844 "r_mbytes_per_sec": 0, 00:17:30.844 "w_mbytes_per_sec": 0 00:17:30.844 }, 00:17:30.844 "claimed": true, 00:17:30.844 "claim_type": "exclusive_write", 00:17:30.844 "zoned": false, 00:17:30.844 "supported_io_types": { 00:17:30.844 "read": true, 00:17:30.844 "write": true, 00:17:30.844 "unmap": true, 00:17:30.844 "flush": true, 00:17:30.844 "reset": true, 00:17:30.844 "nvme_admin": false, 00:17:30.844 "nvme_io": false, 00:17:30.844 "nvme_io_md": false, 00:17:30.844 "write_zeroes": true, 00:17:30.844 "zcopy": true, 00:17:30.844 "get_zone_info": false, 00:17:30.844 "zone_management": false, 00:17:30.844 "zone_append": false, 00:17:30.844 "compare": false, 00:17:30.844 "compare_and_write": false, 00:17:30.844 "abort": true, 00:17:30.844 "seek_hole": false, 00:17:30.844 "seek_data": false, 00:17:30.844 "copy": true, 00:17:30.844 "nvme_iov_md": false 00:17:30.844 }, 00:17:30.844 "memory_domains": [ 00:17:30.844 { 00:17:30.844 "dma_device_id": "system", 00:17:30.844 "dma_device_type": 1 00:17:30.844 }, 00:17:30.844 { 00:17:30.844 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:30.844 "dma_device_type": 2 00:17:30.844 } 00:17:30.844 ], 00:17:30.844 "driver_specific": {} 00:17:30.844 } 00:17:30.844 ] 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:17:30.844 17:36:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.844 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:31.102 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:31.102 "name": "Existed_Raid", 00:17:31.102 "uuid": "c23d4467-42d0-11ef-96ac-773515fba644", 00:17:31.102 "strip_size_kb": 0, 00:17:31.102 "state": "online", 00:17:31.102 "raid_level": "raid1", 00:17:31.102 "superblock": true, 00:17:31.102 "num_base_bdevs": 2, 00:17:31.102 "num_base_bdevs_discovered": 2, 00:17:31.102 "num_base_bdevs_operational": 2, 00:17:31.102 "base_bdevs_list": [ 00:17:31.102 { 00:17:31.102 "name": "BaseBdev1", 00:17:31.102 "uuid": "c1345d65-42d0-11ef-96ac-773515fba644", 00:17:31.102 "is_configured": true, 00:17:31.102 "data_offset": 256, 00:17:31.102 "data_size": 7936 00:17:31.102 }, 00:17:31.102 { 00:17:31.102 "name": "BaseBdev2", 00:17:31.102 "uuid": "c2c7d9c3-42d0-11ef-96ac-773515fba644", 00:17:31.102 "is_configured": true, 00:17:31.102 "data_offset": 256, 00:17:31.102 "data_size": 7936 00:17:31.102 } 00:17:31.102 ] 00:17:31.102 }' 00:17:31.102 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:31.102 17:36:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.361 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:31.361 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:31.361 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:31.361 17:36:27 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:31.361 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:31.361 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:31.361 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:31.361 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:31.619 [2024-07-15 17:36:27.424423] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.619 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:31.619 "name": "Existed_Raid", 00:17:31.619 "aliases": [ 00:17:31.619 "c23d4467-42d0-11ef-96ac-773515fba644" 00:17:31.619 ], 00:17:31.619 "product_name": "Raid Volume", 00:17:31.619 "block_size": 4096, 00:17:31.619 "num_blocks": 7936, 00:17:31.619 "uuid": "c23d4467-42d0-11ef-96ac-773515fba644", 00:17:31.619 "md_size": 32, 00:17:31.619 "md_interleave": false, 00:17:31.619 "dif_type": 0, 00:17:31.619 "assigned_rate_limits": { 00:17:31.619 "rw_ios_per_sec": 0, 00:17:31.619 "rw_mbytes_per_sec": 0, 00:17:31.619 "r_mbytes_per_sec": 0, 00:17:31.619 "w_mbytes_per_sec": 0 00:17:31.619 }, 00:17:31.619 "claimed": false, 00:17:31.619 "zoned": false, 00:17:31.619 "supported_io_types": { 00:17:31.619 "read": true, 00:17:31.619 "write": true, 00:17:31.619 "unmap": false, 00:17:31.619 "flush": false, 00:17:31.619 "reset": true, 00:17:31.619 "nvme_admin": false, 00:17:31.619 "nvme_io": false, 00:17:31.619 "nvme_io_md": false, 00:17:31.619 "write_zeroes": true, 00:17:31.619 "zcopy": false, 00:17:31.619 "get_zone_info": false, 00:17:31.619 "zone_management": false, 00:17:31.619 "zone_append": false, 00:17:31.619 "compare": false, 00:17:31.619 "compare_and_write": false, 00:17:31.619 "abort": false, 00:17:31.619 "seek_hole": false, 00:17:31.619 "seek_data": false, 00:17:31.619 "copy": false, 00:17:31.619 "nvme_iov_md": false 00:17:31.619 }, 00:17:31.620 "memory_domains": [ 00:17:31.620 { 00:17:31.620 "dma_device_id": "system", 00:17:31.620 "dma_device_type": 1 00:17:31.620 }, 00:17:31.620 { 00:17:31.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.620 "dma_device_type": 2 00:17:31.620 }, 00:17:31.620 { 00:17:31.620 "dma_device_id": "system", 00:17:31.620 "dma_device_type": 1 00:17:31.620 }, 00:17:31.620 { 00:17:31.620 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.620 "dma_device_type": 2 00:17:31.620 } 00:17:31.620 ], 00:17:31.620 "driver_specific": { 00:17:31.620 "raid": { 00:17:31.620 "uuid": "c23d4467-42d0-11ef-96ac-773515fba644", 00:17:31.620 "strip_size_kb": 0, 00:17:31.620 "state": "online", 00:17:31.620 "raid_level": "raid1", 00:17:31.620 "superblock": true, 00:17:31.620 "num_base_bdevs": 2, 00:17:31.620 "num_base_bdevs_discovered": 2, 00:17:31.620 "num_base_bdevs_operational": 2, 00:17:31.620 "base_bdevs_list": [ 00:17:31.620 { 00:17:31.620 "name": "BaseBdev1", 00:17:31.620 "uuid": "c1345d65-42d0-11ef-96ac-773515fba644", 00:17:31.620 "is_configured": true, 00:17:31.620 "data_offset": 256, 00:17:31.620 "data_size": 7936 00:17:31.620 }, 00:17:31.620 { 00:17:31.620 "name": "BaseBdev2", 00:17:31.620 "uuid": "c2c7d9c3-42d0-11ef-96ac-773515fba644", 00:17:31.620 "is_configured": true, 00:17:31.620 "data_offset": 
256, 00:17:31.620 "data_size": 7936 00:17:31.620 } 00:17:31.620 ] 00:17:31.620 } 00:17:31.620 } 00:17:31.620 }' 00:17:31.620 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:31.878 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:31.878 BaseBdev2' 00:17:31.878 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:31.878 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:31.879 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:31.879 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:31.879 "name": "BaseBdev1", 00:17:31.879 "aliases": [ 00:17:31.879 "c1345d65-42d0-11ef-96ac-773515fba644" 00:17:31.879 ], 00:17:31.879 "product_name": "Malloc disk", 00:17:31.879 "block_size": 4096, 00:17:31.879 "num_blocks": 8192, 00:17:31.879 "uuid": "c1345d65-42d0-11ef-96ac-773515fba644", 00:17:31.879 "md_size": 32, 00:17:31.879 "md_interleave": false, 00:17:31.879 "dif_type": 0, 00:17:31.879 "assigned_rate_limits": { 00:17:31.879 "rw_ios_per_sec": 0, 00:17:31.879 "rw_mbytes_per_sec": 0, 00:17:31.879 "r_mbytes_per_sec": 0, 00:17:31.879 "w_mbytes_per_sec": 0 00:17:31.879 }, 00:17:31.879 "claimed": true, 00:17:31.879 "claim_type": "exclusive_write", 00:17:31.879 "zoned": false, 00:17:31.879 "supported_io_types": { 00:17:31.879 "read": true, 00:17:31.879 "write": true, 00:17:31.879 "unmap": true, 00:17:31.879 "flush": true, 00:17:31.879 "reset": true, 00:17:31.879 "nvme_admin": false, 00:17:31.879 "nvme_io": false, 00:17:31.879 "nvme_io_md": false, 00:17:31.879 "write_zeroes": true, 00:17:31.879 "zcopy": true, 00:17:31.879 "get_zone_info": false, 00:17:31.879 "zone_management": false, 00:17:31.879 "zone_append": false, 00:17:31.879 "compare": false, 00:17:31.879 "compare_and_write": false, 00:17:31.879 "abort": true, 00:17:31.879 "seek_hole": false, 00:17:31.879 "seek_data": false, 00:17:31.879 "copy": true, 00:17:31.879 "nvme_iov_md": false 00:17:31.879 }, 00:17:31.879 "memory_domains": [ 00:17:31.879 { 00:17:31.879 "dma_device_id": "system", 00:17:31.879 "dma_device_type": 1 00:17:31.879 }, 00:17:31.879 { 00:17:31.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:31.879 "dma_device_type": 2 00:17:31.879 } 00:17:31.879 ], 00:17:31.879 "driver_specific": {} 00:17:31.879 }' 00:17:31.879 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.137 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:32.138 17:36:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:32.396 "name": "BaseBdev2", 00:17:32.396 "aliases": [ 00:17:32.396 "c2c7d9c3-42d0-11ef-96ac-773515fba644" 00:17:32.396 ], 00:17:32.396 "product_name": "Malloc disk", 00:17:32.396 "block_size": 4096, 00:17:32.396 "num_blocks": 8192, 00:17:32.396 "uuid": "c2c7d9c3-42d0-11ef-96ac-773515fba644", 00:17:32.396 "md_size": 32, 00:17:32.396 "md_interleave": false, 00:17:32.396 "dif_type": 0, 00:17:32.396 "assigned_rate_limits": { 00:17:32.396 "rw_ios_per_sec": 0, 00:17:32.396 "rw_mbytes_per_sec": 0, 00:17:32.396 "r_mbytes_per_sec": 0, 00:17:32.396 "w_mbytes_per_sec": 0 00:17:32.396 }, 00:17:32.396 "claimed": true, 00:17:32.396 "claim_type": "exclusive_write", 00:17:32.396 "zoned": false, 00:17:32.396 "supported_io_types": { 00:17:32.396 "read": true, 00:17:32.396 "write": true, 00:17:32.396 "unmap": true, 00:17:32.396 "flush": true, 00:17:32.396 "reset": true, 00:17:32.396 "nvme_admin": false, 00:17:32.396 "nvme_io": false, 00:17:32.396 "nvme_io_md": false, 00:17:32.396 "write_zeroes": true, 00:17:32.396 "zcopy": true, 00:17:32.396 "get_zone_info": false, 00:17:32.396 "zone_management": false, 00:17:32.396 "zone_append": false, 00:17:32.396 "compare": false, 00:17:32.396 "compare_and_write": false, 00:17:32.396 "abort": true, 00:17:32.396 "seek_hole": false, 00:17:32.396 "seek_data": false, 00:17:32.396 "copy": true, 00:17:32.396 "nvme_iov_md": false 00:17:32.396 }, 00:17:32.396 "memory_domains": [ 00:17:32.396 { 00:17:32.396 "dma_device_id": "system", 00:17:32.396 "dma_device_type": 1 00:17:32.396 }, 00:17:32.396 { 00:17:32.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:32.396 "dma_device_type": 2 00:17:32.396 } 00:17:32.396 ], 00:17:32.396 "driver_specific": {} 00:17:32.396 }' 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 
32 ]] 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:32.396 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:32.654 [2024-07-15 17:36:28.368465] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.654 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.912 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.912 "name": "Existed_Raid", 00:17:32.912 "uuid": "c23d4467-42d0-11ef-96ac-773515fba644", 00:17:32.912 "strip_size_kb": 0, 00:17:32.912 "state": "online", 00:17:32.912 
"raid_level": "raid1", 00:17:32.912 "superblock": true, 00:17:32.912 "num_base_bdevs": 2, 00:17:32.912 "num_base_bdevs_discovered": 1, 00:17:32.912 "num_base_bdevs_operational": 1, 00:17:32.912 "base_bdevs_list": [ 00:17:32.912 { 00:17:32.912 "name": null, 00:17:32.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.912 "is_configured": false, 00:17:32.912 "data_offset": 256, 00:17:32.912 "data_size": 7936 00:17:32.912 }, 00:17:32.912 { 00:17:32.912 "name": "BaseBdev2", 00:17:32.912 "uuid": "c2c7d9c3-42d0-11ef-96ac-773515fba644", 00:17:32.912 "is_configured": true, 00:17:32.912 "data_offset": 256, 00:17:32.912 "data_size": 7936 00:17:32.912 } 00:17:32.912 ] 00:17:32.912 }' 00:17:32.912 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.912 17:36:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:33.479 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:33.479 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:33.479 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:33.479 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:33.479 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:33.479 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:33.479 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:33.738 [2024-07-15 17:36:29.538810] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:33.738 [2024-07-15 17:36:29.538872] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:33.738 [2024-07-15 17:36:29.545344] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:33.738 [2024-07-15 17:36:29.545359] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:33.738 [2024-07-15 17:36:29.545379] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x32d76da34a00 name Existed_Raid, state offline 00:17:33.738 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:33.738 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:33.738 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:33.738 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:34.304 
17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 66203 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66203 ']' 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 66203 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66203 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:34.304 killing process with pid 66203 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66203' 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 66203 00:17:34.304 [2024-07-15 17:36:29.869393] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:34.304 [2024-07-15 17:36:29.869428] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:34.304 17:36:29 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 66203 00:17:34.304 17:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:17:34.304 00:17:34.304 real 0m9.278s 00:17:34.304 user 0m16.102s 00:17:34.304 sys 0m1.701s 00:17:34.304 17:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:34.304 ************************************ 00:17:34.304 END TEST raid_state_function_test_sb_md_separate 00:17:34.304 ************************************ 00:17:34.304 17:36:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.304 17:36:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:34.305 17:36:30 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:34.305 17:36:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:34.305 17:36:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:34.305 17:36:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:34.305 ************************************ 00:17:34.305 START TEST raid_superblock_test_md_separate 00:17:34.305 ************************************ 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=66477 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 66477 /var/tmp/spdk-raid.sock 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66477 ']' 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.305 17:36:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:34.305 [2024-07-15 17:36:30.096515] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:17:34.305 [2024-07-15 17:36:30.096726] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:34.870 EAL: TSC is not safe to use in SMP mode 00:17:34.871 EAL: TSC is not invariant 00:17:34.871 [2024-07-15 17:36:30.645024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.129 [2024-07-15 17:36:30.733007] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:35.129 [2024-07-15 17:36:30.735071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.129 [2024-07-15 17:36:30.735816] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.129 [2024-07-15 17:36:30.735829] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.387 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:35.644 malloc1 00:17:35.644 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:35.902 [2024-07-15 17:36:31.683922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:35.902 [2024-07-15 17:36:31.684002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:35.902 [2024-07-15 17:36:31.684015] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d722b434780 00:17:35.902 [2024-07-15 17:36:31.684023] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:35.902 [2024-07-15 17:36:31.684894] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:35.902 [2024-07-15 17:36:31.684925] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:35.902 pt1 00:17:35.902 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:35.902 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:35.902 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:35.902 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:35.902 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:35.902 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:35.902 17:36:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:35.902 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:35.902 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:36.159 malloc2 00:17:36.159 17:36:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:36.418 [2024-07-15 17:36:32.159928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:36.418 [2024-07-15 17:36:32.159987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:36.418 [2024-07-15 17:36:32.160000] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d722b434c80 00:17:36.418 [2024-07-15 17:36:32.160009] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:36.418 [2024-07-15 17:36:32.160639] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:36.418 [2024-07-15 17:36:32.160664] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:36.418 pt2 00:17:36.418 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:36.418 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:36.418 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:36.676 [2024-07-15 17:36:32.395938] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:36.676 [2024-07-15 17:36:32.396507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:36.676 [2024-07-15 17:36:32.396582] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d722b434f00 00:17:36.676 [2024-07-15 17:36:32.396589] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:36.676 [2024-07-15 17:36:32.396628] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d722b497e20 00:17:36.676 [2024-07-15 17:36:32.396659] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d722b434f00 00:17:36.676 [2024-07-15 17:36:32.396663] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3d722b434f00 00:17:36.676 [2024-07-15 17:36:32.396679] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.676 
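With both passthru bdevs in place, the volume itself is created with -s, which writes an on-disk superblock to each base bdev; that reservation is why the RAID volume reports 7936 usable blocks against 8192 per base bdev (data_offset 256 in the dumps that follow). A condensed sketch of the create-and-verify step, where the jq filter is the one verify_raid_bdev_state uses above:

  "$rootdir/scripts/rpc.py" -s "$sock" bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
  # Read the volume back and confirm it assembled as a two-member online raid1.
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state'     # expected: online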
17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.676 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.934 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.934 "name": "raid_bdev1", 00:17:36.934 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:36.934 "strip_size_kb": 0, 00:17:36.934 "state": "online", 00:17:36.934 "raid_level": "raid1", 00:17:36.934 "superblock": true, 00:17:36.934 "num_base_bdevs": 2, 00:17:36.934 "num_base_bdevs_discovered": 2, 00:17:36.934 "num_base_bdevs_operational": 2, 00:17:36.934 "base_bdevs_list": [ 00:17:36.934 { 00:17:36.934 "name": "pt1", 00:17:36.934 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:36.934 "is_configured": true, 00:17:36.934 "data_offset": 256, 00:17:36.934 "data_size": 7936 00:17:36.934 }, 00:17:36.934 { 00:17:36.934 "name": "pt2", 00:17:36.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:36.934 "is_configured": true, 00:17:36.934 "data_offset": 256, 00:17:36.934 "data_size": 7936 00:17:36.934 } 00:17:36.934 ] 00:17:36.934 }' 00:17:36.934 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.934 17:36:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:37.191 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:37.191 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:37.191 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:37.191 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:37.191 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:37.191 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:37.191 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:37.191 17:36:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:37.449 [2024-07-15 17:36:33.203985] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:37.449 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:37.449 "name": "raid_bdev1", 00:17:37.449 "aliases": [ 00:17:37.449 "c69379e1-42d0-11ef-96ac-773515fba644" 00:17:37.449 ], 00:17:37.449 "product_name": "Raid Volume", 00:17:37.449 "block_size": 
4096, 00:17:37.449 "num_blocks": 7936, 00:17:37.449 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:37.449 "md_size": 32, 00:17:37.449 "md_interleave": false, 00:17:37.449 "dif_type": 0, 00:17:37.449 "assigned_rate_limits": { 00:17:37.449 "rw_ios_per_sec": 0, 00:17:37.449 "rw_mbytes_per_sec": 0, 00:17:37.449 "r_mbytes_per_sec": 0, 00:17:37.449 "w_mbytes_per_sec": 0 00:17:37.449 }, 00:17:37.449 "claimed": false, 00:17:37.449 "zoned": false, 00:17:37.449 "supported_io_types": { 00:17:37.449 "read": true, 00:17:37.449 "write": true, 00:17:37.449 "unmap": false, 00:17:37.449 "flush": false, 00:17:37.449 "reset": true, 00:17:37.449 "nvme_admin": false, 00:17:37.449 "nvme_io": false, 00:17:37.449 "nvme_io_md": false, 00:17:37.449 "write_zeroes": true, 00:17:37.449 "zcopy": false, 00:17:37.449 "get_zone_info": false, 00:17:37.449 "zone_management": false, 00:17:37.449 "zone_append": false, 00:17:37.449 "compare": false, 00:17:37.449 "compare_and_write": false, 00:17:37.449 "abort": false, 00:17:37.449 "seek_hole": false, 00:17:37.449 "seek_data": false, 00:17:37.449 "copy": false, 00:17:37.449 "nvme_iov_md": false 00:17:37.449 }, 00:17:37.449 "memory_domains": [ 00:17:37.449 { 00:17:37.449 "dma_device_id": "system", 00:17:37.449 "dma_device_type": 1 00:17:37.449 }, 00:17:37.449 { 00:17:37.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.449 "dma_device_type": 2 00:17:37.449 }, 00:17:37.449 { 00:17:37.449 "dma_device_id": "system", 00:17:37.449 "dma_device_type": 1 00:17:37.449 }, 00:17:37.449 { 00:17:37.449 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.449 "dma_device_type": 2 00:17:37.449 } 00:17:37.449 ], 00:17:37.449 "driver_specific": { 00:17:37.449 "raid": { 00:17:37.449 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:37.449 "strip_size_kb": 0, 00:17:37.449 "state": "online", 00:17:37.449 "raid_level": "raid1", 00:17:37.449 "superblock": true, 00:17:37.449 "num_base_bdevs": 2, 00:17:37.449 "num_base_bdevs_discovered": 2, 00:17:37.449 "num_base_bdevs_operational": 2, 00:17:37.449 "base_bdevs_list": [ 00:17:37.449 { 00:17:37.449 "name": "pt1", 00:17:37.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.449 "is_configured": true, 00:17:37.449 "data_offset": 256, 00:17:37.449 "data_size": 7936 00:17:37.449 }, 00:17:37.449 { 00:17:37.449 "name": "pt2", 00:17:37.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:37.449 "is_configured": true, 00:17:37.449 "data_offset": 256, 00:17:37.449 "data_size": 7936 00:17:37.449 } 00:17:37.449 ] 00:17:37.449 } 00:17:37.449 } 00:17:37.449 }' 00:17:37.449 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:37.449 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:37.449 pt2' 00:17:37.449 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.449 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:37.449 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.707 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.707 "name": "pt1", 00:17:37.707 "aliases": [ 00:17:37.707 "00000000-0000-0000-0000-000000000001" 00:17:37.707 ], 00:17:37.707 "product_name": 
"passthru", 00:17:37.707 "block_size": 4096, 00:17:37.707 "num_blocks": 8192, 00:17:37.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:37.707 "md_size": 32, 00:17:37.707 "md_interleave": false, 00:17:37.707 "dif_type": 0, 00:17:37.707 "assigned_rate_limits": { 00:17:37.707 "rw_ios_per_sec": 0, 00:17:37.707 "rw_mbytes_per_sec": 0, 00:17:37.707 "r_mbytes_per_sec": 0, 00:17:37.707 "w_mbytes_per_sec": 0 00:17:37.707 }, 00:17:37.707 "claimed": true, 00:17:37.707 "claim_type": "exclusive_write", 00:17:37.707 "zoned": false, 00:17:37.707 "supported_io_types": { 00:17:37.707 "read": true, 00:17:37.707 "write": true, 00:17:37.707 "unmap": true, 00:17:37.707 "flush": true, 00:17:37.707 "reset": true, 00:17:37.707 "nvme_admin": false, 00:17:37.707 "nvme_io": false, 00:17:37.707 "nvme_io_md": false, 00:17:37.707 "write_zeroes": true, 00:17:37.707 "zcopy": true, 00:17:37.707 "get_zone_info": false, 00:17:37.707 "zone_management": false, 00:17:37.707 "zone_append": false, 00:17:37.707 "compare": false, 00:17:37.707 "compare_and_write": false, 00:17:37.707 "abort": true, 00:17:37.707 "seek_hole": false, 00:17:37.707 "seek_data": false, 00:17:37.707 "copy": true, 00:17:37.707 "nvme_iov_md": false 00:17:37.707 }, 00:17:37.707 "memory_domains": [ 00:17:37.707 { 00:17:37.707 "dma_device_id": "system", 00:17:37.707 "dma_device_type": 1 00:17:37.707 }, 00:17:37.707 { 00:17:37.707 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.707 "dma_device_type": 2 00:17:37.707 } 00:17:37.707 ], 00:17:37.707 "driver_specific": { 00:17:37.707 "passthru": { 00:17:37.707 "name": "pt1", 00:17:37.707 "base_bdev_name": "malloc1" 00:17:37.707 } 00:17:37.707 } 00:17:37.707 }' 00:17:37.707 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.707 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.707 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:37.707 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.707 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.964 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:37.964 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.964 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.964 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:37.964 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.964 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.964 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:37.965 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.965 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:37.965 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:38.222 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:38.222 "name": 
"pt2", 00:17:38.222 "aliases": [ 00:17:38.223 "00000000-0000-0000-0000-000000000002" 00:17:38.223 ], 00:17:38.223 "product_name": "passthru", 00:17:38.223 "block_size": 4096, 00:17:38.223 "num_blocks": 8192, 00:17:38.223 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:38.223 "md_size": 32, 00:17:38.223 "md_interleave": false, 00:17:38.223 "dif_type": 0, 00:17:38.223 "assigned_rate_limits": { 00:17:38.223 "rw_ios_per_sec": 0, 00:17:38.223 "rw_mbytes_per_sec": 0, 00:17:38.223 "r_mbytes_per_sec": 0, 00:17:38.223 "w_mbytes_per_sec": 0 00:17:38.223 }, 00:17:38.223 "claimed": true, 00:17:38.223 "claim_type": "exclusive_write", 00:17:38.223 "zoned": false, 00:17:38.223 "supported_io_types": { 00:17:38.223 "read": true, 00:17:38.223 "write": true, 00:17:38.223 "unmap": true, 00:17:38.223 "flush": true, 00:17:38.223 "reset": true, 00:17:38.223 "nvme_admin": false, 00:17:38.223 "nvme_io": false, 00:17:38.223 "nvme_io_md": false, 00:17:38.223 "write_zeroes": true, 00:17:38.223 "zcopy": true, 00:17:38.223 "get_zone_info": false, 00:17:38.223 "zone_management": false, 00:17:38.223 "zone_append": false, 00:17:38.223 "compare": false, 00:17:38.223 "compare_and_write": false, 00:17:38.223 "abort": true, 00:17:38.223 "seek_hole": false, 00:17:38.223 "seek_data": false, 00:17:38.223 "copy": true, 00:17:38.223 "nvme_iov_md": false 00:17:38.223 }, 00:17:38.223 "memory_domains": [ 00:17:38.223 { 00:17:38.223 "dma_device_id": "system", 00:17:38.223 "dma_device_type": 1 00:17:38.223 }, 00:17:38.223 { 00:17:38.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:38.223 "dma_device_type": 2 00:17:38.223 } 00:17:38.223 ], 00:17:38.223 "driver_specific": { 00:17:38.223 "passthru": { 00:17:38.223 "name": "pt2", 00:17:38.223 "base_bdev_name": "malloc2" 00:17:38.223 } 00:17:38.223 } 00:17:38.223 }' 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:38.223 17:36:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:38.480 [2024-07-15 17:36:34.115986] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:38.480 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=c69379e1-42d0-11ef-96ac-773515fba644 00:17:38.480 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z c69379e1-42d0-11ef-96ac-773515fba644 ']' 00:17:38.480 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:38.739 [2024-07-15 17:36:34.355949] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:38.739 [2024-07-15 17:36:34.355974] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.739 [2024-07-15 17:36:34.355996] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.739 [2024-07-15 17:36:34.356010] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.739 [2024-07-15 17:36:34.356015] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d722b434f00 name raid_bdev1, state offline 00:17:38.739 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:38.739 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.997 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:38.997 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:38.997 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:38.997 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:39.255 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:39.255 17:36:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:39.513 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:39.513 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.771 
17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:39.771 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:40.029 [2024-07-15 17:36:35.659988] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:40.029 [2024-07-15 17:36:35.660582] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:40.029 [2024-07-15 17:36:35.660601] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:40.029 [2024-07-15 17:36:35.660639] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:40.029 [2024-07-15 17:36:35.660650] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.029 [2024-07-15 17:36:35.660654] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d722b434c80 name raid_bdev1, state configuring 00:17:40.029 request: 00:17:40.029 { 00:17:40.029 "name": "raid_bdev1", 00:17:40.029 "raid_level": "raid1", 00:17:40.029 "base_bdevs": [ 00:17:40.029 "malloc1", 00:17:40.029 "malloc2" 00:17:40.029 ], 00:17:40.029 "superblock": false, 00:17:40.029 "method": "bdev_raid_create", 00:17:40.029 "req_id": 1 00:17:40.029 } 00:17:40.029 Got JSON-RPC error response 00:17:40.029 response: 00:17:40.029 { 00:17:40.029 "code": -17, 00:17:40.029 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:40.029 } 00:17:40.029 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:17:40.029 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:40.029 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:40.029 17:36:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:40.029 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.029 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:40.286 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:40.286 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 
-- # '[' -n '' ']' 00:17:40.287 17:36:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.544 [2024-07-15 17:36:36.139990] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.544 [2024-07-15 17:36:36.140047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.544 [2024-07-15 17:36:36.140060] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d722b434780 00:17:40.544 [2024-07-15 17:36:36.140069] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.544 [2024-07-15 17:36:36.140737] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.544 [2024-07-15 17:36:36.140779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.544 [2024-07-15 17:36:36.140812] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:40.544 [2024-07-15 17:36:36.140826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:40.544 pt1 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:40.544 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.802 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:40.802 "name": "raid_bdev1", 00:17:40.802 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:40.802 "strip_size_kb": 0, 00:17:40.802 "state": "configuring", 00:17:40.802 "raid_level": "raid1", 00:17:40.802 "superblock": true, 00:17:40.802 "num_base_bdevs": 2, 00:17:40.802 "num_base_bdevs_discovered": 1, 00:17:40.802 "num_base_bdevs_operational": 2, 00:17:40.802 "base_bdevs_list": [ 00:17:40.802 { 00:17:40.802 "name": "pt1", 00:17:40.802 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:40.802 "is_configured": true, 00:17:40.802 "data_offset": 256, 00:17:40.802 "data_size": 7936 00:17:40.802 }, 00:17:40.802 { 
00:17:40.802 "name": null, 00:17:40.802 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:40.802 "is_configured": false, 00:17:40.802 "data_offset": 256, 00:17:40.802 "data_size": 7936 00:17:40.802 } 00:17:40.802 ] 00:17:40.802 }' 00:17:40.802 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:40.802 17:36:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.060 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:41.060 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:41.060 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:41.060 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.333 [2024-07-15 17:36:36.940016] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.333 [2024-07-15 17:36:36.940077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.333 [2024-07-15 17:36:36.940089] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d722b434f00 00:17:41.333 [2024-07-15 17:36:36.940098] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.333 [2024-07-15 17:36:36.940167] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.333 [2024-07-15 17:36:36.940177] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.333 [2024-07-15 17:36:36.940200] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:41.333 [2024-07-15 17:36:36.940209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.333 [2024-07-15 17:36:36.940227] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d722b435180 00:17:41.333 [2024-07-15 17:36:36.940230] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:41.333 [2024-07-15 17:36:36.940250] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d722b497e20 00:17:41.333 [2024-07-15 17:36:36.940272] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d722b435180 00:17:41.333 [2024-07-15 17:36:36.940276] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3d722b435180 00:17:41.333 [2024-07-15 17:36:36.940290] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.333 pt2 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:41.333 
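Because the superblock lives on the base bdevs, the volume re-assembles without any further bdev_raid_create call: re-creating pt1 alone is enough for the examine path to find the superblock and bring raid_bdev1 up in the "configuring" state with one of two members, and re-creating pt2 completes it back to "online", as the state dumps above and below show. A sketch of that sequence, commands as in the trace:

  "$rootdir/scripts/rpc.py" -s "$sock" bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state'     # configuring (1 of 2 members)
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  "$rootdir/scripts/rpc.py" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "raid_bdev1") | .state'     # online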
17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.333 17:36:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.602 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.602 "name": "raid_bdev1", 00:17:41.602 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:41.602 "strip_size_kb": 0, 00:17:41.602 "state": "online", 00:17:41.602 "raid_level": "raid1", 00:17:41.602 "superblock": true, 00:17:41.602 "num_base_bdevs": 2, 00:17:41.602 "num_base_bdevs_discovered": 2, 00:17:41.602 "num_base_bdevs_operational": 2, 00:17:41.602 "base_bdevs_list": [ 00:17:41.602 { 00:17:41.602 "name": "pt1", 00:17:41.602 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.602 "is_configured": true, 00:17:41.602 "data_offset": 256, 00:17:41.602 "data_size": 7936 00:17:41.602 }, 00:17:41.602 { 00:17:41.602 "name": "pt2", 00:17:41.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.602 "is_configured": true, 00:17:41.602 "data_offset": 256, 00:17:41.602 "data_size": 7936 00:17:41.602 } 00:17:41.602 ] 00:17:41.602 }' 00:17:41.602 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.602 17:36:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:41.860 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:41.860 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:41.860 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:41.860 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:41.860 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:41.860 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:41.860 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:41.860 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:42.118 [2024-07-15 17:36:37.812058] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.118 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:42.118 "name": "raid_bdev1", 00:17:42.118 "aliases": [ 00:17:42.118 
"c69379e1-42d0-11ef-96ac-773515fba644" 00:17:42.118 ], 00:17:42.118 "product_name": "Raid Volume", 00:17:42.118 "block_size": 4096, 00:17:42.118 "num_blocks": 7936, 00:17:42.118 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:42.118 "md_size": 32, 00:17:42.118 "md_interleave": false, 00:17:42.118 "dif_type": 0, 00:17:42.118 "assigned_rate_limits": { 00:17:42.118 "rw_ios_per_sec": 0, 00:17:42.118 "rw_mbytes_per_sec": 0, 00:17:42.118 "r_mbytes_per_sec": 0, 00:17:42.118 "w_mbytes_per_sec": 0 00:17:42.118 }, 00:17:42.118 "claimed": false, 00:17:42.118 "zoned": false, 00:17:42.118 "supported_io_types": { 00:17:42.118 "read": true, 00:17:42.118 "write": true, 00:17:42.118 "unmap": false, 00:17:42.118 "flush": false, 00:17:42.118 "reset": true, 00:17:42.118 "nvme_admin": false, 00:17:42.118 "nvme_io": false, 00:17:42.118 "nvme_io_md": false, 00:17:42.118 "write_zeroes": true, 00:17:42.118 "zcopy": false, 00:17:42.118 "get_zone_info": false, 00:17:42.118 "zone_management": false, 00:17:42.118 "zone_append": false, 00:17:42.118 "compare": false, 00:17:42.118 "compare_and_write": false, 00:17:42.118 "abort": false, 00:17:42.118 "seek_hole": false, 00:17:42.118 "seek_data": false, 00:17:42.118 "copy": false, 00:17:42.118 "nvme_iov_md": false 00:17:42.118 }, 00:17:42.118 "memory_domains": [ 00:17:42.118 { 00:17:42.118 "dma_device_id": "system", 00:17:42.118 "dma_device_type": 1 00:17:42.118 }, 00:17:42.118 { 00:17:42.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.118 "dma_device_type": 2 00:17:42.118 }, 00:17:42.118 { 00:17:42.118 "dma_device_id": "system", 00:17:42.118 "dma_device_type": 1 00:17:42.118 }, 00:17:42.118 { 00:17:42.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.118 "dma_device_type": 2 00:17:42.118 } 00:17:42.118 ], 00:17:42.118 "driver_specific": { 00:17:42.118 "raid": { 00:17:42.118 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:42.118 "strip_size_kb": 0, 00:17:42.118 "state": "online", 00:17:42.118 "raid_level": "raid1", 00:17:42.118 "superblock": true, 00:17:42.119 "num_base_bdevs": 2, 00:17:42.119 "num_base_bdevs_discovered": 2, 00:17:42.119 "num_base_bdevs_operational": 2, 00:17:42.119 "base_bdevs_list": [ 00:17:42.119 { 00:17:42.119 "name": "pt1", 00:17:42.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.119 "is_configured": true, 00:17:42.119 "data_offset": 256, 00:17:42.119 "data_size": 7936 00:17:42.119 }, 00:17:42.119 { 00:17:42.119 "name": "pt2", 00:17:42.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.119 "is_configured": true, 00:17:42.119 "data_offset": 256, 00:17:42.119 "data_size": 7936 00:17:42.119 } 00:17:42.119 ] 00:17:42.119 } 00:17:42.119 } 00:17:42.119 }' 00:17:42.119 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.119 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:42.119 pt2' 00:17:42.119 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.119 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.119 17:36:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.377 "name": "pt1", 
00:17:42.377 "aliases": [ 00:17:42.377 "00000000-0000-0000-0000-000000000001" 00:17:42.377 ], 00:17:42.377 "product_name": "passthru", 00:17:42.377 "block_size": 4096, 00:17:42.377 "num_blocks": 8192, 00:17:42.377 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.377 "md_size": 32, 00:17:42.377 "md_interleave": false, 00:17:42.377 "dif_type": 0, 00:17:42.377 "assigned_rate_limits": { 00:17:42.377 "rw_ios_per_sec": 0, 00:17:42.377 "rw_mbytes_per_sec": 0, 00:17:42.377 "r_mbytes_per_sec": 0, 00:17:42.377 "w_mbytes_per_sec": 0 00:17:42.377 }, 00:17:42.377 "claimed": true, 00:17:42.377 "claim_type": "exclusive_write", 00:17:42.377 "zoned": false, 00:17:42.377 "supported_io_types": { 00:17:42.377 "read": true, 00:17:42.377 "write": true, 00:17:42.377 "unmap": true, 00:17:42.377 "flush": true, 00:17:42.377 "reset": true, 00:17:42.377 "nvme_admin": false, 00:17:42.377 "nvme_io": false, 00:17:42.377 "nvme_io_md": false, 00:17:42.377 "write_zeroes": true, 00:17:42.377 "zcopy": true, 00:17:42.377 "get_zone_info": false, 00:17:42.377 "zone_management": false, 00:17:42.377 "zone_append": false, 00:17:42.377 "compare": false, 00:17:42.377 "compare_and_write": false, 00:17:42.377 "abort": true, 00:17:42.377 "seek_hole": false, 00:17:42.377 "seek_data": false, 00:17:42.377 "copy": true, 00:17:42.377 "nvme_iov_md": false 00:17:42.377 }, 00:17:42.377 "memory_domains": [ 00:17:42.377 { 00:17:42.377 "dma_device_id": "system", 00:17:42.377 "dma_device_type": 1 00:17:42.377 }, 00:17:42.377 { 00:17:42.377 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.377 "dma_device_type": 2 00:17:42.377 } 00:17:42.377 ], 00:17:42.377 "driver_specific": { 00:17:42.377 "passthru": { 00:17:42.377 "name": "pt1", 00:17:42.377 "base_bdev_name": "malloc1" 00:17:42.377 } 00:17:42.377 } 00:17:42.377 }' 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:42.377 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.635 
17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.635 "name": "pt2", 00:17:42.635 "aliases": [ 00:17:42.635 "00000000-0000-0000-0000-000000000002" 00:17:42.635 ], 00:17:42.635 "product_name": "passthru", 00:17:42.635 "block_size": 4096, 00:17:42.635 "num_blocks": 8192, 00:17:42.635 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.635 "md_size": 32, 00:17:42.635 "md_interleave": false, 00:17:42.635 "dif_type": 0, 00:17:42.635 "assigned_rate_limits": { 00:17:42.635 "rw_ios_per_sec": 0, 00:17:42.635 "rw_mbytes_per_sec": 0, 00:17:42.635 "r_mbytes_per_sec": 0, 00:17:42.635 "w_mbytes_per_sec": 0 00:17:42.635 }, 00:17:42.635 "claimed": true, 00:17:42.635 "claim_type": "exclusive_write", 00:17:42.635 "zoned": false, 00:17:42.635 "supported_io_types": { 00:17:42.635 "read": true, 00:17:42.635 "write": true, 00:17:42.635 "unmap": true, 00:17:42.635 "flush": true, 00:17:42.635 "reset": true, 00:17:42.635 "nvme_admin": false, 00:17:42.635 "nvme_io": false, 00:17:42.635 "nvme_io_md": false, 00:17:42.635 "write_zeroes": true, 00:17:42.635 "zcopy": true, 00:17:42.635 "get_zone_info": false, 00:17:42.635 "zone_management": false, 00:17:42.635 "zone_append": false, 00:17:42.635 "compare": false, 00:17:42.635 "compare_and_write": false, 00:17:42.635 "abort": true, 00:17:42.635 "seek_hole": false, 00:17:42.635 "seek_data": false, 00:17:42.635 "copy": true, 00:17:42.635 "nvme_iov_md": false 00:17:42.635 }, 00:17:42.635 "memory_domains": [ 00:17:42.635 { 00:17:42.635 "dma_device_id": "system", 00:17:42.635 "dma_device_type": 1 00:17:42.635 }, 00:17:42.635 { 00:17:42.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.635 "dma_device_type": 2 00:17:42.635 } 00:17:42.635 ], 00:17:42.635 "driver_specific": { 00:17:42.635 "passthru": { 00:17:42.635 "name": "pt2", 00:17:42.635 "base_bdev_name": "malloc2" 00:17:42.635 } 00:17:42.635 } 00:17:42.635 }' 00:17:42.635 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.635 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.635 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:42.635 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.635 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.635 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:42.635 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.892 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.892 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:42.892 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.892 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.892 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:42.892 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:42.892 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | 
.uuid' 00:17:42.892 [2024-07-15 17:36:38.708069] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' c69379e1-42d0-11ef-96ac-773515fba644 '!=' c69379e1-42d0-11ef-96ac-773515fba644 ']' 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:43.150 [2024-07-15 17:36:38.952044] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.150 17:36:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.407 17:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:43.407 "name": "raid_bdev1", 00:17:43.407 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:43.407 "strip_size_kb": 0, 00:17:43.407 "state": "online", 00:17:43.407 "raid_level": "raid1", 00:17:43.407 "superblock": true, 00:17:43.407 "num_base_bdevs": 2, 00:17:43.407 "num_base_bdevs_discovered": 1, 00:17:43.407 "num_base_bdevs_operational": 1, 00:17:43.407 "base_bdevs_list": [ 00:17:43.407 { 00:17:43.407 "name": null, 00:17:43.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.407 "is_configured": false, 00:17:43.407 "data_offset": 256, 00:17:43.407 "data_size": 7936 00:17:43.407 }, 00:17:43.407 { 00:17:43.407 "name": "pt2", 00:17:43.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:43.407 "is_configured": true, 00:17:43.407 "data_offset": 256, 00:17:43.407 "data_size": 7936 00:17:43.407 } 00:17:43.407 ] 00:17:43.407 }' 00:17:43.407 17:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:17:43.407 17:36:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:43.971 17:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:43.971 [2024-07-15 17:36:39.756041] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.971 [2024-07-15 17:36:39.756072] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.971 [2024-07-15 17:36:39.756094] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.971 [2024-07-15 17:36:39.756107] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.971 [2024-07-15 17:36:39.756112] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d722b435180 name raid_bdev1, state offline 00:17:43.971 17:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.971 17:36:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:44.228 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:44.228 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:44.228 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:44.228 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:44.228 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:44.485 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:44.485 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:44.485 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:44.485 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:44.485 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:17:44.485 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:45.060 [2024-07-15 17:36:40.588060] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:45.060 [2024-07-15 17:36:40.588119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.060 [2024-07-15 17:36:40.588132] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d722b434f00 00:17:45.060 [2024-07-15 17:36:40.588147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.060 [2024-07-15 17:36:40.588770] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.060 [2024-07-15 17:36:40.588791] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:45.060 [2024-07-15 17:36:40.588816] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:17:45.060 [2024-07-15 17:36:40.588829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:45.060 [2024-07-15 17:36:40.588844] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d722b435180 00:17:45.060 [2024-07-15 17:36:40.588848] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:45.060 [2024-07-15 17:36:40.588868] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d722b497e20 00:17:45.060 [2024-07-15 17:36:40.588891] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d722b435180 00:17:45.060 [2024-07-15 17:36:40.588895] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3d722b435180 00:17:45.060 [2024-07-15 17:36:40.588909] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.060 pt2 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.060 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.061 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.061 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.061 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.061 "name": "raid_bdev1", 00:17:45.061 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:45.061 "strip_size_kb": 0, 00:17:45.061 "state": "online", 00:17:45.061 "raid_level": "raid1", 00:17:45.061 "superblock": true, 00:17:45.061 "num_base_bdevs": 2, 00:17:45.061 "num_base_bdevs_discovered": 1, 00:17:45.061 "num_base_bdevs_operational": 1, 00:17:45.061 "base_bdevs_list": [ 00:17:45.061 { 00:17:45.061 "name": null, 00:17:45.061 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.061 "is_configured": false, 00:17:45.061 "data_offset": 256, 00:17:45.061 "data_size": 7936 00:17:45.061 }, 00:17:45.061 { 00:17:45.061 "name": "pt2", 00:17:45.061 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.061 "is_configured": true, 00:17:45.061 "data_offset": 256, 00:17:45.061 "data_size": 7936 00:17:45.061 } 00:17:45.061 ] 00:17:45.061 }' 00:17:45.061 17:36:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:17:45.061 17:36:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:45.626 17:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:45.626 [2024-07-15 17:36:41.424077] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:45.626 [2024-07-15 17:36:41.424107] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:45.626 [2024-07-15 17:36:41.424131] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:45.626 [2024-07-15 17:36:41.424144] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:45.626 [2024-07-15 17:36:41.424149] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d722b435180 name raid_bdev1, state offline 00:17:45.626 17:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.626 17:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:46.192 17:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:46.192 17:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:46.192 17:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:46.192 17:36:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:46.451 [2024-07-15 17:36:42.036090] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:46.451 [2024-07-15 17:36:42.036151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:46.451 [2024-07-15 17:36:42.036164] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3d722b434c80 00:17:46.451 [2024-07-15 17:36:42.036172] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:46.451 [2024-07-15 17:36:42.036778] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:46.451 [2024-07-15 17:36:42.036804] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:46.451 [2024-07-15 17:36:42.036829] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:46.451 [2024-07-15 17:36:42.036841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:46.451 [2024-07-15 17:36:42.036862] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:46.451 [2024-07-15 17:36:42.036866] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:46.451 [2024-07-15 17:36:42.036872] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d722b434780 name raid_bdev1, state configuring 00:17:46.451 [2024-07-15 17:36:42.036880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:46.451 [2024-07-15 17:36:42.036894] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d722b434780 00:17:46.451 [2024-07-15 
17:36:42.036897] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:46.451 [2024-07-15 17:36:42.036918] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d722b497e20 00:17:46.451 [2024-07-15 17:36:42.036945] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d722b434780 00:17:46.451 [2024-07-15 17:36:42.036949] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3d722b434780 00:17:46.451 [2024-07-15 17:36:42.036971] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:46.451 pt1 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.451 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:46.709 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.709 "name": "raid_bdev1", 00:17:46.709 "uuid": "c69379e1-42d0-11ef-96ac-773515fba644", 00:17:46.709 "strip_size_kb": 0, 00:17:46.709 "state": "online", 00:17:46.709 "raid_level": "raid1", 00:17:46.709 "superblock": true, 00:17:46.709 "num_base_bdevs": 2, 00:17:46.709 "num_base_bdevs_discovered": 1, 00:17:46.709 "num_base_bdevs_operational": 1, 00:17:46.709 "base_bdevs_list": [ 00:17:46.709 { 00:17:46.709 "name": null, 00:17:46.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.709 "is_configured": false, 00:17:46.709 "data_offset": 256, 00:17:46.709 "data_size": 7936 00:17:46.709 }, 00:17:46.709 { 00:17:46.709 "name": "pt2", 00:17:46.709 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.709 "is_configured": true, 00:17:46.709 "data_offset": 256, 00:17:46.709 "data_size": 7936 00:17:46.709 } 00:17:46.709 ] 00:17:46.709 }' 00:17:46.709 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.709 17:36:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:46.967 17:36:42 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:46.967 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:47.224 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:47.224 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:47.224 17:36:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:47.483 [2024-07-15 17:36:43.060145] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' c69379e1-42d0-11ef-96ac-773515fba644 '!=' c69379e1-42d0-11ef-96ac-773515fba644 ']' 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 66477 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66477 ']' 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 66477 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66477 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:47.483 killing process with pid 66477 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66477' 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 66477 00:17:47.483 [2024-07-15 17:36:43.090095] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:47.483 [2024-07-15 17:36:43.090127] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:47.483 [2024-07-15 17:36:43.090141] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:47.483 [2024-07-15 17:36:43.090145] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d722b434780 name raid_bdev1, state offline 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 66477 00:17:47.483 [2024-07-15 17:36:43.101680] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:17:47.483 00:17:47.483 real 0m13.190s 00:17:47.483 user 0m23.428s 00:17:47.483 sys 0m2.172s 00:17:47.483 ************************************ 00:17:47.483 END TEST raid_superblock_test_md_separate 00:17:47.483 ************************************ 00:17:47.483 17:36:43 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:47.483 17:36:43 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:47.742 17:36:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:47.742 17:36:43 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:17:47.742 17:36:43 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:17:47.742 17:36:43 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:47.742 17:36:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:47.742 17:36:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:47.742 17:36:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:47.742 ************************************ 00:17:47.742 START TEST raid_state_function_test_sb_md_interleaved 00:17:47.742 ************************************ 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 
-- # '[' raid1 '!=' raid1 ']' 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=66868 00:17:47.742 Process raid pid: 66868 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66868' 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 66868 /var/tmp/spdk-raid.sock 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66868 ']' 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.742 17:36:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:47.742 [2024-07-15 17:36:43.337197] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:17:47.742 [2024-07-15 17:36:43.337453] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:48.309 EAL: TSC is not safe to use in SMP mode 00:17:48.309 EAL: TSC is not invariant 00:17:48.309 [2024-07-15 17:36:43.857566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.309 [2024-07-15 17:36:43.954485] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:17:48.309 [2024-07-15 17:36:43.957069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.309 [2024-07-15 17:36:43.957976] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.309 [2024-07-15 17:36:43.957993] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:48.567 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.567 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:17:48.567 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:48.826 [2024-07-15 17:36:44.579365] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:48.826 [2024-07-15 17:36:44.579421] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:48.826 [2024-07-15 17:36:44.579427] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:48.826 [2024-07-15 17:36:44.579436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.826 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:49.084 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.084 "name": "Existed_Raid", 00:17:49.084 "uuid": "cdd68529-42d0-11ef-96ac-773515fba644", 00:17:49.084 "strip_size_kb": 0, 00:17:49.084 "state": "configuring", 00:17:49.084 "raid_level": "raid1", 00:17:49.084 "superblock": true, 00:17:49.084 "num_base_bdevs": 2, 00:17:49.084 "num_base_bdevs_discovered": 0, 00:17:49.084 "num_base_bdevs_operational": 2, 00:17:49.084 
"base_bdevs_list": [ 00:17:49.084 { 00:17:49.084 "name": "BaseBdev1", 00:17:49.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.084 "is_configured": false, 00:17:49.084 "data_offset": 0, 00:17:49.084 "data_size": 0 00:17:49.084 }, 00:17:49.084 { 00:17:49.084 "name": "BaseBdev2", 00:17:49.084 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.084 "is_configured": false, 00:17:49.084 "data_offset": 0, 00:17:49.084 "data_size": 0 00:17:49.084 } 00:17:49.084 ] 00:17:49.084 }' 00:17:49.084 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.084 17:36:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.362 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:49.929 [2024-07-15 17:36:45.455356] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:49.929 [2024-07-15 17:36:45.455391] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x366f92a34500 name Existed_Raid, state configuring 00:17:49.929 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:49.929 [2024-07-15 17:36:45.687372] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:49.929 [2024-07-15 17:36:45.687431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:49.929 [2024-07-15 17:36:45.687437] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:49.929 [2024-07-15 17:36:45.687446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:49.929 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:50.186 [2024-07-15 17:36:45.948281] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:50.186 BaseBdev1 00:17:50.186 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:50.186 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:50.186 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:50.186 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:17:50.186 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:50.186 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:50.186 17:36:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:50.443 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:17:50.702 [ 00:17:50.702 { 00:17:50.702 "name": "BaseBdev1", 00:17:50.702 "aliases": [ 00:17:50.702 "cea74371-42d0-11ef-96ac-773515fba644" 00:17:50.702 ], 00:17:50.702 "product_name": "Malloc disk", 00:17:50.702 "block_size": 4128, 00:17:50.702 "num_blocks": 8192, 00:17:50.702 "uuid": "cea74371-42d0-11ef-96ac-773515fba644", 00:17:50.702 "md_size": 32, 00:17:50.702 "md_interleave": true, 00:17:50.702 "dif_type": 0, 00:17:50.702 "assigned_rate_limits": { 00:17:50.702 "rw_ios_per_sec": 0, 00:17:50.702 "rw_mbytes_per_sec": 0, 00:17:50.702 "r_mbytes_per_sec": 0, 00:17:50.702 "w_mbytes_per_sec": 0 00:17:50.702 }, 00:17:50.702 "claimed": true, 00:17:50.702 "claim_type": "exclusive_write", 00:17:50.702 "zoned": false, 00:17:50.702 "supported_io_types": { 00:17:50.702 "read": true, 00:17:50.702 "write": true, 00:17:50.702 "unmap": true, 00:17:50.702 "flush": true, 00:17:50.702 "reset": true, 00:17:50.702 "nvme_admin": false, 00:17:50.702 "nvme_io": false, 00:17:50.702 "nvme_io_md": false, 00:17:50.702 "write_zeroes": true, 00:17:50.702 "zcopy": true, 00:17:50.702 "get_zone_info": false, 00:17:50.702 "zone_management": false, 00:17:50.702 "zone_append": false, 00:17:50.702 "compare": false, 00:17:50.702 "compare_and_write": false, 00:17:50.702 "abort": true, 00:17:50.702 "seek_hole": false, 00:17:50.702 "seek_data": false, 00:17:50.702 "copy": true, 00:17:50.702 "nvme_iov_md": false 00:17:50.702 }, 00:17:50.702 "memory_domains": [ 00:17:50.702 { 00:17:50.702 "dma_device_id": "system", 00:17:50.702 "dma_device_type": 1 00:17:50.702 }, 00:17:50.702 { 00:17:50.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:50.702 "dma_device_type": 2 00:17:50.702 } 00:17:50.702 ], 00:17:50.702 "driver_specific": {} 00:17:50.702 } 00:17:50.702 ] 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.702 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:17:50.960 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.960 "name": "Existed_Raid", 00:17:50.960 "uuid": "ce7f96ab-42d0-11ef-96ac-773515fba644", 00:17:50.960 "strip_size_kb": 0, 00:17:50.960 "state": "configuring", 00:17:50.960 "raid_level": "raid1", 00:17:50.960 "superblock": true, 00:17:50.960 "num_base_bdevs": 2, 00:17:50.960 "num_base_bdevs_discovered": 1, 00:17:50.960 "num_base_bdevs_operational": 2, 00:17:50.960 "base_bdevs_list": [ 00:17:50.960 { 00:17:50.960 "name": "BaseBdev1", 00:17:50.960 "uuid": "cea74371-42d0-11ef-96ac-773515fba644", 00:17:50.960 "is_configured": true, 00:17:50.960 "data_offset": 256, 00:17:50.960 "data_size": 7936 00:17:50.960 }, 00:17:50.960 { 00:17:50.960 "name": "BaseBdev2", 00:17:50.960 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.960 "is_configured": false, 00:17:50.960 "data_offset": 0, 00:17:50.960 "data_size": 0 00:17:50.960 } 00:17:50.960 ] 00:17:50.960 }' 00:17:50.960 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.960 17:36:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.525 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:51.525 [2024-07-15 17:36:47.327377] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:51.525 [2024-07-15 17:36:47.327412] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x366f92a34500 name Existed_Raid, state configuring 00:17:51.525 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:52.092 [2024-07-15 17:36:47.619400] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:52.092 [2024-07-15 17:36:47.620208] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:52.092 [2024-07-15 17:36:47.620245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # 
local raid_bdev_info 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:52.092 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:52.093 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:52.093 "name": "Existed_Raid", 00:17:52.093 "uuid": "cfa66467-42d0-11ef-96ac-773515fba644", 00:17:52.093 "strip_size_kb": 0, 00:17:52.093 "state": "configuring", 00:17:52.093 "raid_level": "raid1", 00:17:52.093 "superblock": true, 00:17:52.093 "num_base_bdevs": 2, 00:17:52.093 "num_base_bdevs_discovered": 1, 00:17:52.093 "num_base_bdevs_operational": 2, 00:17:52.093 "base_bdevs_list": [ 00:17:52.093 { 00:17:52.093 "name": "BaseBdev1", 00:17:52.093 "uuid": "cea74371-42d0-11ef-96ac-773515fba644", 00:17:52.093 "is_configured": true, 00:17:52.093 "data_offset": 256, 00:17:52.093 "data_size": 7936 00:17:52.093 }, 00:17:52.093 { 00:17:52.093 "name": "BaseBdev2", 00:17:52.093 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:52.093 "is_configured": false, 00:17:52.093 "data_offset": 0, 00:17:52.093 "data_size": 0 00:17:52.093 } 00:17:52.093 ] 00:17:52.093 }' 00:17:52.093 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:52.093 17:36:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:52.659 17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:52.659 [2024-07-15 17:36:48.407489] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:52.659 [2024-07-15 17:36:48.407548] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x366f92a34a00 00:17:52.659 [2024-07-15 17:36:48.407555] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:52.659 [2024-07-15 17:36:48.407576] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x366f92a97e20 00:17:52.659 [2024-07-15 17:36:48.407590] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x366f92a34a00 00:17:52.659 [2024-07-15 17:36:48.407593] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x366f92a34a00 00:17:52.659 [2024-07-15 17:36:48.407605] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:52.659 BaseBdev2 00:17:52.659 17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:52.659 17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:52.659 17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:52.659 
17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:17:52.659 17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:52.659 17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:52.659 17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:52.918 17:36:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:53.484 [ 00:17:53.484 { 00:17:53.484 "name": "BaseBdev2", 00:17:53.484 "aliases": [ 00:17:53.484 "d01ea296-42d0-11ef-96ac-773515fba644" 00:17:53.484 ], 00:17:53.484 "product_name": "Malloc disk", 00:17:53.484 "block_size": 4128, 00:17:53.484 "num_blocks": 8192, 00:17:53.484 "uuid": "d01ea296-42d0-11ef-96ac-773515fba644", 00:17:53.484 "md_size": 32, 00:17:53.484 "md_interleave": true, 00:17:53.484 "dif_type": 0, 00:17:53.484 "assigned_rate_limits": { 00:17:53.484 "rw_ios_per_sec": 0, 00:17:53.484 "rw_mbytes_per_sec": 0, 00:17:53.484 "r_mbytes_per_sec": 0, 00:17:53.484 "w_mbytes_per_sec": 0 00:17:53.484 }, 00:17:53.484 "claimed": true, 00:17:53.484 "claim_type": "exclusive_write", 00:17:53.484 "zoned": false, 00:17:53.484 "supported_io_types": { 00:17:53.484 "read": true, 00:17:53.484 "write": true, 00:17:53.484 "unmap": true, 00:17:53.484 "flush": true, 00:17:53.484 "reset": true, 00:17:53.484 "nvme_admin": false, 00:17:53.484 "nvme_io": false, 00:17:53.484 "nvme_io_md": false, 00:17:53.484 "write_zeroes": true, 00:17:53.484 "zcopy": true, 00:17:53.484 "get_zone_info": false, 00:17:53.484 "zone_management": false, 00:17:53.484 "zone_append": false, 00:17:53.484 "compare": false, 00:17:53.484 "compare_and_write": false, 00:17:53.484 "abort": true, 00:17:53.484 "seek_hole": false, 00:17:53.484 "seek_data": false, 00:17:53.484 "copy": true, 00:17:53.484 "nvme_iov_md": false 00:17:53.484 }, 00:17:53.484 "memory_domains": [ 00:17:53.484 { 00:17:53.484 "dma_device_id": "system", 00:17:53.484 "dma_device_type": 1 00:17:53.484 }, 00:17:53.484 { 00:17:53.484 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:53.484 "dma_device_type": 2 00:17:53.484 } 00:17:53.484 ], 00:17:53.484 "driver_specific": {} 00:17:53.484 } 00:17:53.484 ] 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:53.484 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:53.485 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:53.485 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:53.485 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:53.485 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:53.485 "name": "Existed_Raid", 00:17:53.485 "uuid": "cfa66467-42d0-11ef-96ac-773515fba644", 00:17:53.485 "strip_size_kb": 0, 00:17:53.485 "state": "online", 00:17:53.485 "raid_level": "raid1", 00:17:53.485 "superblock": true, 00:17:53.485 "num_base_bdevs": 2, 00:17:53.485 "num_base_bdevs_discovered": 2, 00:17:53.485 "num_base_bdevs_operational": 2, 00:17:53.485 "base_bdevs_list": [ 00:17:53.485 { 00:17:53.485 "name": "BaseBdev1", 00:17:53.485 "uuid": "cea74371-42d0-11ef-96ac-773515fba644", 00:17:53.485 "is_configured": true, 00:17:53.485 "data_offset": 256, 00:17:53.485 "data_size": 7936 00:17:53.485 }, 00:17:53.485 { 00:17:53.485 "name": "BaseBdev2", 00:17:53.485 "uuid": "d01ea296-42d0-11ef-96ac-773515fba644", 00:17:53.485 "is_configured": true, 00:17:53.485 "data_offset": 256, 00:17:53.485 "data_size": 7936 00:17:53.485 } 00:17:53.485 ] 00:17:53.485 }' 00:17:53.485 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:53.485 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:54.051 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:54.051 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:54.051 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:54.051 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:54.051 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:54.051 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:54.051 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:54.051 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:54.051 [2024-07-15 17:36:49.871472] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:54.309 17:36:49 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:54.309 "name": "Existed_Raid", 00:17:54.309 "aliases": [ 00:17:54.309 "cfa66467-42d0-11ef-96ac-773515fba644" 00:17:54.309 ], 00:17:54.309 "product_name": "Raid Volume", 00:17:54.309 "block_size": 4128, 00:17:54.309 "num_blocks": 7936, 00:17:54.309 "uuid": "cfa66467-42d0-11ef-96ac-773515fba644", 00:17:54.309 "md_size": 32, 00:17:54.309 "md_interleave": true, 00:17:54.309 "dif_type": 0, 00:17:54.309 "assigned_rate_limits": { 00:17:54.309 "rw_ios_per_sec": 0, 00:17:54.309 "rw_mbytes_per_sec": 0, 00:17:54.309 "r_mbytes_per_sec": 0, 00:17:54.309 "w_mbytes_per_sec": 0 00:17:54.309 }, 00:17:54.309 "claimed": false, 00:17:54.309 "zoned": false, 00:17:54.309 "supported_io_types": { 00:17:54.309 "read": true, 00:17:54.309 "write": true, 00:17:54.309 "unmap": false, 00:17:54.309 "flush": false, 00:17:54.309 "reset": true, 00:17:54.309 "nvme_admin": false, 00:17:54.309 "nvme_io": false, 00:17:54.309 "nvme_io_md": false, 00:17:54.309 "write_zeroes": true, 00:17:54.309 "zcopy": false, 00:17:54.309 "get_zone_info": false, 00:17:54.309 "zone_management": false, 00:17:54.309 "zone_append": false, 00:17:54.309 "compare": false, 00:17:54.309 "compare_and_write": false, 00:17:54.309 "abort": false, 00:17:54.309 "seek_hole": false, 00:17:54.309 "seek_data": false, 00:17:54.309 "copy": false, 00:17:54.309 "nvme_iov_md": false 00:17:54.309 }, 00:17:54.309 "memory_domains": [ 00:17:54.309 { 00:17:54.309 "dma_device_id": "system", 00:17:54.309 "dma_device_type": 1 00:17:54.309 }, 00:17:54.309 { 00:17:54.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.309 "dma_device_type": 2 00:17:54.309 }, 00:17:54.309 { 00:17:54.309 "dma_device_id": "system", 00:17:54.309 "dma_device_type": 1 00:17:54.309 }, 00:17:54.309 { 00:17:54.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.309 "dma_device_type": 2 00:17:54.309 } 00:17:54.309 ], 00:17:54.309 "driver_specific": { 00:17:54.310 "raid": { 00:17:54.310 "uuid": "cfa66467-42d0-11ef-96ac-773515fba644", 00:17:54.310 "strip_size_kb": 0, 00:17:54.310 "state": "online", 00:17:54.310 "raid_level": "raid1", 00:17:54.310 "superblock": true, 00:17:54.310 "num_base_bdevs": 2, 00:17:54.310 "num_base_bdevs_discovered": 2, 00:17:54.310 "num_base_bdevs_operational": 2, 00:17:54.310 "base_bdevs_list": [ 00:17:54.310 { 00:17:54.310 "name": "BaseBdev1", 00:17:54.310 "uuid": "cea74371-42d0-11ef-96ac-773515fba644", 00:17:54.310 "is_configured": true, 00:17:54.310 "data_offset": 256, 00:17:54.310 "data_size": 7936 00:17:54.310 }, 00:17:54.310 { 00:17:54.310 "name": "BaseBdev2", 00:17:54.310 "uuid": "d01ea296-42d0-11ef-96ac-773515fba644", 00:17:54.310 "is_configured": true, 00:17:54.310 "data_offset": 256, 00:17:54.310 "data_size": 7936 00:17:54.310 } 00:17:54.310 ] 00:17:54.310 } 00:17:54.310 } 00:17:54.310 }' 00:17:54.310 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:54.310 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:54.310 BaseBdev2' 00:17:54.310 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.310 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.310 17:36:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.568 "name": "BaseBdev1", 00:17:54.568 "aliases": [ 00:17:54.568 "cea74371-42d0-11ef-96ac-773515fba644" 00:17:54.568 ], 00:17:54.568 "product_name": "Malloc disk", 00:17:54.568 "block_size": 4128, 00:17:54.568 "num_blocks": 8192, 00:17:54.568 "uuid": "cea74371-42d0-11ef-96ac-773515fba644", 00:17:54.568 "md_size": 32, 00:17:54.568 "md_interleave": true, 00:17:54.568 "dif_type": 0, 00:17:54.568 "assigned_rate_limits": { 00:17:54.568 "rw_ios_per_sec": 0, 00:17:54.568 "rw_mbytes_per_sec": 0, 00:17:54.568 "r_mbytes_per_sec": 0, 00:17:54.568 "w_mbytes_per_sec": 0 00:17:54.568 }, 00:17:54.568 "claimed": true, 00:17:54.568 "claim_type": "exclusive_write", 00:17:54.568 "zoned": false, 00:17:54.568 "supported_io_types": { 00:17:54.568 "read": true, 00:17:54.568 "write": true, 00:17:54.568 "unmap": true, 00:17:54.568 "flush": true, 00:17:54.568 "reset": true, 00:17:54.568 "nvme_admin": false, 00:17:54.568 "nvme_io": false, 00:17:54.568 "nvme_io_md": false, 00:17:54.568 "write_zeroes": true, 00:17:54.568 "zcopy": true, 00:17:54.568 "get_zone_info": false, 00:17:54.568 "zone_management": false, 00:17:54.568 "zone_append": false, 00:17:54.568 "compare": false, 00:17:54.568 "compare_and_write": false, 00:17:54.568 "abort": true, 00:17:54.568 "seek_hole": false, 00:17:54.568 "seek_data": false, 00:17:54.568 "copy": true, 00:17:54.568 "nvme_iov_md": false 00:17:54.568 }, 00:17:54.568 "memory_domains": [ 00:17:54.568 { 00:17:54.568 "dma_device_id": "system", 00:17:54.568 "dma_device_type": 1 00:17:54.568 }, 00:17:54.568 { 00:17:54.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.568 "dma_device_type": 2 00:17:54.568 } 00:17:54.568 ], 00:17:54.568 "driver_specific": {} 00:17:54.568 }' 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:54.568 
17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:54.568 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:54.826 "name": "BaseBdev2", 00:17:54.826 "aliases": [ 00:17:54.826 "d01ea296-42d0-11ef-96ac-773515fba644" 00:17:54.826 ], 00:17:54.826 "product_name": "Malloc disk", 00:17:54.826 "block_size": 4128, 00:17:54.826 "num_blocks": 8192, 00:17:54.826 "uuid": "d01ea296-42d0-11ef-96ac-773515fba644", 00:17:54.826 "md_size": 32, 00:17:54.826 "md_interleave": true, 00:17:54.826 "dif_type": 0, 00:17:54.826 "assigned_rate_limits": { 00:17:54.826 "rw_ios_per_sec": 0, 00:17:54.826 "rw_mbytes_per_sec": 0, 00:17:54.826 "r_mbytes_per_sec": 0, 00:17:54.826 "w_mbytes_per_sec": 0 00:17:54.826 }, 00:17:54.826 "claimed": true, 00:17:54.826 "claim_type": "exclusive_write", 00:17:54.826 "zoned": false, 00:17:54.826 "supported_io_types": { 00:17:54.826 "read": true, 00:17:54.826 "write": true, 00:17:54.826 "unmap": true, 00:17:54.826 "flush": true, 00:17:54.826 "reset": true, 00:17:54.826 "nvme_admin": false, 00:17:54.826 "nvme_io": false, 00:17:54.826 "nvme_io_md": false, 00:17:54.826 "write_zeroes": true, 00:17:54.826 "zcopy": true, 00:17:54.826 "get_zone_info": false, 00:17:54.826 "zone_management": false, 00:17:54.826 "zone_append": false, 00:17:54.826 "compare": false, 00:17:54.826 "compare_and_write": false, 00:17:54.826 "abort": true, 00:17:54.826 "seek_hole": false, 00:17:54.826 "seek_data": false, 00:17:54.826 "copy": true, 00:17:54.826 "nvme_iov_md": false 00:17:54.826 }, 00:17:54.826 "memory_domains": [ 00:17:54.826 { 00:17:54.826 "dma_device_id": "system", 00:17:54.826 "dma_device_type": 1 00:17:54.826 }, 00:17:54.826 { 00:17:54.826 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:54.826 "dma_device_type": 2 00:17:54.826 } 00:17:54.826 ], 00:17:54.826 "driver_specific": {} 00:17:54.826 }' 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:54.826 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:55.084 [2024-07-15 17:36:50.863460] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.084 17:36:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:55.341 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.341 "name": "Existed_Raid", 00:17:55.341 "uuid": "cfa66467-42d0-11ef-96ac-773515fba644", 00:17:55.341 "strip_size_kb": 0, 00:17:55.341 "state": "online", 00:17:55.341 "raid_level": "raid1", 00:17:55.341 "superblock": true, 00:17:55.341 "num_base_bdevs": 2, 00:17:55.341 "num_base_bdevs_discovered": 1, 00:17:55.341 "num_base_bdevs_operational": 1, 00:17:55.341 "base_bdevs_list": [ 00:17:55.341 { 00:17:55.341 "name": null, 00:17:55.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:55.341 "is_configured": false, 00:17:55.341 "data_offset": 256, 00:17:55.341 "data_size": 7936 00:17:55.341 }, 00:17:55.341 { 00:17:55.341 "name": "BaseBdev2", 00:17:55.341 "uuid": "d01ea296-42d0-11ef-96ac-773515fba644", 00:17:55.341 "is_configured": true, 00:17:55.341 "data_offset": 256, 
00:17:55.341 "data_size": 7936 00:17:55.341 } 00:17:55.341 ] 00:17:55.341 }' 00:17:55.341 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.341 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.907 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:55.907 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:55.907 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.907 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:55.907 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:55.907 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:55.907 17:36:51 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:56.164 [2024-07-15 17:36:51.993328] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:56.164 [2024-07-15 17:36:51.993386] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.422 [2024-07-15 17:36:52.001486] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.422 [2024-07-15 17:36:52.001505] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.422 [2024-07-15 17:36:52.001510] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x366f92a34a00 name Existed_Raid, state offline 00:17:56.422 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:56.422 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:56.422 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:56.422 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 66868 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66868 ']' 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66868 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66868 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:56.679 killing process with pid 66868 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66868' 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 66868 00:17:56.679 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 66868 00:17:56.679 [2024-07-15 17:36:52.294806] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:56.679 [2024-07-15 17:36:52.294858] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:56.937 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:17:56.937 00:17:56.937 real 0m9.209s 00:17:56.937 user 0m16.137s 00:17:56.937 sys 0m1.463s 00:17:56.937 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:56.937 ************************************ 00:17:56.937 END TEST raid_state_function_test_sb_md_interleaved 00:17:56.937 ************************************ 00:17:56.937 17:36:52 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.937 17:36:52 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:56.937 17:36:52 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:56.937 17:36:52 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:56.937 17:36:52 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.937 17:36:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:56.937 ************************************ 00:17:56.937 START TEST raid_superblock_test_md_interleaved 00:17:56.937 ************************************ 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- 
# local base_bdevs_pt_uuid 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:56.937 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=67142 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 67142 /var/tmp/spdk-raid.sock 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67142 ']' 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:56.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:56.938 17:36:52 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.938 [2024-07-15 17:36:52.595015] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:17:56.938 [2024-07-15 17:36:52.595294] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:57.580 EAL: TSC is not safe to use in SMP mode 00:17:57.580 EAL: TSC is not invariant 00:17:57.580 [2024-07-15 17:36:53.110705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.580 [2024-07-15 17:36:53.229477] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
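(Note: the trace below builds the test fixture over the RPC socket started above. A condensed, hand-runnable sketch of that sequence, assuming the same repo layout /home/vagrant/spdk_repo/spdk and socket /var/tmp/spdk-raid.sock as in the log — this is a sketch, not a verbatim excerpt of bdev_raid.sh:

  # start the minimal bdev application with raid debug logging on a dedicated RPC socket
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
  # wait for the socket before issuing RPCs (the test uses waitforlisten for this)
  while [ ! -S /var/tmp/spdk-raid.sock ]; do sleep 0.1; done

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # two 32 MB malloc bdevs, 4096-byte blocks, 32-byte interleaved metadata (-m 32 -i)
  $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1
  $rpc -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2
  # wrap each in a passthru bdev with a fixed UUID so the raid sees stable base names
  $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
  $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  # assemble a two-disk raid1 with an on-disk superblock (-s)
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s
)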
00:17:57.580 [2024-07-15 17:36:53.232157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.580 [2024-07-15 17:36:53.233419] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.580 [2024-07-15 17:36:53.233446] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:57.837 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:58.095 malloc1 00:17:58.354 17:36:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:58.354 [2024-07-15 17:36:54.156205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:58.354 [2024-07-15 17:36:54.156269] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.354 [2024-07-15 17:36:54.156282] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e0bd3434780 00:17:58.354 [2024-07-15 17:36:54.156290] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.354 [2024-07-15 17:36:54.157116] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.354 [2024-07-15 17:36:54.157145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:58.354 pt1 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:58.354 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:58.921 malloc2 00:17:58.921 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:58.921 [2024-07-15 17:36:54.740210] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:58.921 [2024-07-15 17:36:54.740270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.921 [2024-07-15 17:36:54.740282] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e0bd3434c80 00:17:58.921 [2024-07-15 17:36:54.740290] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.921 [2024-07-15 17:36:54.740913] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.921 [2024-07-15 17:36:54.740940] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:58.921 pt2 00:17:59.179 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:59.180 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:59.180 17:36:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:59.437 [2024-07-15 17:36:55.028224] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:59.438 [2024-07-15 17:36:55.028841] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:59.438 [2024-07-15 17:36:55.028909] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e0bd3434f00 00:17:59.438 [2024-07-15 17:36:55.028916] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:59.438 [2024-07-15 17:36:55.028957] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e0bd3497e20 00:17:59.438 [2024-07-15 17:36:55.028975] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e0bd3434f00 00:17:59.438 [2024-07-15 17:36:55.028979] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e0bd3434f00 00:17:59.438 [2024-07-15 17:36:55.028992] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:59.438 17:36:55 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.438 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.696 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:59.696 "name": "raid_bdev1", 00:17:59.696 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:17:59.696 "strip_size_kb": 0, 00:17:59.696 "state": "online", 00:17:59.696 "raid_level": "raid1", 00:17:59.696 "superblock": true, 00:17:59.696 "num_base_bdevs": 2, 00:17:59.696 "num_base_bdevs_discovered": 2, 00:17:59.696 "num_base_bdevs_operational": 2, 00:17:59.696 "base_bdevs_list": [ 00:17:59.696 { 00:17:59.696 "name": "pt1", 00:17:59.696 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:59.696 "is_configured": true, 00:17:59.696 "data_offset": 256, 00:17:59.696 "data_size": 7936 00:17:59.696 }, 00:17:59.696 { 00:17:59.696 "name": "pt2", 00:17:59.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:59.696 "is_configured": true, 00:17:59.696 "data_offset": 256, 00:17:59.696 "data_size": 7936 00:17:59.696 } 00:17:59.696 ] 00:17:59.696 }' 00:17:59.696 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:59.696 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.954 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:59.954 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:59.954 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:59.954 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:59.954 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:59.954 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:59.954 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:59.954 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:00.212 [2024-07-15 17:36:55.912283] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:00.212 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:00.212 "name": "raid_bdev1", 
00:18:00.212 "aliases": [ 00:18:00.212 "d410e39c-42d0-11ef-96ac-773515fba644" 00:18:00.212 ], 00:18:00.212 "product_name": "Raid Volume", 00:18:00.212 "block_size": 4128, 00:18:00.212 "num_blocks": 7936, 00:18:00.212 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:00.212 "md_size": 32, 00:18:00.212 "md_interleave": true, 00:18:00.212 "dif_type": 0, 00:18:00.212 "assigned_rate_limits": { 00:18:00.212 "rw_ios_per_sec": 0, 00:18:00.212 "rw_mbytes_per_sec": 0, 00:18:00.212 "r_mbytes_per_sec": 0, 00:18:00.212 "w_mbytes_per_sec": 0 00:18:00.212 }, 00:18:00.212 "claimed": false, 00:18:00.212 "zoned": false, 00:18:00.212 "supported_io_types": { 00:18:00.212 "read": true, 00:18:00.212 "write": true, 00:18:00.212 "unmap": false, 00:18:00.212 "flush": false, 00:18:00.212 "reset": true, 00:18:00.212 "nvme_admin": false, 00:18:00.212 "nvme_io": false, 00:18:00.212 "nvme_io_md": false, 00:18:00.212 "write_zeroes": true, 00:18:00.212 "zcopy": false, 00:18:00.212 "get_zone_info": false, 00:18:00.212 "zone_management": false, 00:18:00.212 "zone_append": false, 00:18:00.212 "compare": false, 00:18:00.212 "compare_and_write": false, 00:18:00.212 "abort": false, 00:18:00.212 "seek_hole": false, 00:18:00.212 "seek_data": false, 00:18:00.212 "copy": false, 00:18:00.212 "nvme_iov_md": false 00:18:00.212 }, 00:18:00.212 "memory_domains": [ 00:18:00.212 { 00:18:00.212 "dma_device_id": "system", 00:18:00.212 "dma_device_type": 1 00:18:00.212 }, 00:18:00.212 { 00:18:00.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.212 "dma_device_type": 2 00:18:00.212 }, 00:18:00.212 { 00:18:00.212 "dma_device_id": "system", 00:18:00.212 "dma_device_type": 1 00:18:00.212 }, 00:18:00.212 { 00:18:00.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.212 "dma_device_type": 2 00:18:00.212 } 00:18:00.212 ], 00:18:00.212 "driver_specific": { 00:18:00.212 "raid": { 00:18:00.212 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:00.212 "strip_size_kb": 0, 00:18:00.212 "state": "online", 00:18:00.212 "raid_level": "raid1", 00:18:00.212 "superblock": true, 00:18:00.212 "num_base_bdevs": 2, 00:18:00.212 "num_base_bdevs_discovered": 2, 00:18:00.212 "num_base_bdevs_operational": 2, 00:18:00.212 "base_bdevs_list": [ 00:18:00.212 { 00:18:00.213 "name": "pt1", 00:18:00.213 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.213 "is_configured": true, 00:18:00.213 "data_offset": 256, 00:18:00.213 "data_size": 7936 00:18:00.213 }, 00:18:00.213 { 00:18:00.213 "name": "pt2", 00:18:00.213 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.213 "is_configured": true, 00:18:00.213 "data_offset": 256, 00:18:00.213 "data_size": 7936 00:18:00.213 } 00:18:00.213 ] 00:18:00.213 } 00:18:00.213 } 00:18:00.213 }' 00:18:00.213 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:00.213 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:18:00.213 pt2' 00:18:00.213 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.213 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:00.213 17:36:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.501 "name": "pt1", 00:18:00.501 "aliases": [ 00:18:00.501 "00000000-0000-0000-0000-000000000001" 00:18:00.501 ], 00:18:00.501 "product_name": "passthru", 00:18:00.501 "block_size": 4128, 00:18:00.501 "num_blocks": 8192, 00:18:00.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:00.501 "md_size": 32, 00:18:00.501 "md_interleave": true, 00:18:00.501 "dif_type": 0, 00:18:00.501 "assigned_rate_limits": { 00:18:00.501 "rw_ios_per_sec": 0, 00:18:00.501 "rw_mbytes_per_sec": 0, 00:18:00.501 "r_mbytes_per_sec": 0, 00:18:00.501 "w_mbytes_per_sec": 0 00:18:00.501 }, 00:18:00.501 "claimed": true, 00:18:00.501 "claim_type": "exclusive_write", 00:18:00.501 "zoned": false, 00:18:00.501 "supported_io_types": { 00:18:00.501 "read": true, 00:18:00.501 "write": true, 00:18:00.501 "unmap": true, 00:18:00.501 "flush": true, 00:18:00.501 "reset": true, 00:18:00.501 "nvme_admin": false, 00:18:00.501 "nvme_io": false, 00:18:00.501 "nvme_io_md": false, 00:18:00.501 "write_zeroes": true, 00:18:00.501 "zcopy": true, 00:18:00.501 "get_zone_info": false, 00:18:00.501 "zone_management": false, 00:18:00.501 "zone_append": false, 00:18:00.501 "compare": false, 00:18:00.501 "compare_and_write": false, 00:18:00.501 "abort": true, 00:18:00.501 "seek_hole": false, 00:18:00.501 "seek_data": false, 00:18:00.501 "copy": true, 00:18:00.501 "nvme_iov_md": false 00:18:00.501 }, 00:18:00.501 "memory_domains": [ 00:18:00.501 { 00:18:00.501 "dma_device_id": "system", 00:18:00.501 "dma_device_type": 1 00:18:00.501 }, 00:18:00.501 { 00:18:00.501 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.501 "dma_device_type": 2 00:18:00.501 } 00:18:00.501 ], 00:18:00.501 "driver_specific": { 00:18:00.501 "passthru": { 00:18:00.501 "name": "pt1", 00:18:00.501 "base_bdev_name": "malloc1" 00:18:00.501 } 00:18:00.501 } 00:18:00.501 }' 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
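(Note: the pt1 property checks above, and the pt2 checks that follow, reduce to comparing a few jq-extracted fields against the expected interleaved-metadata geometry. A minimal stand-alone version of that check, using the same RPC socket as the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$($rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 | jq '.[]')
  # 4096-byte data plus 32-byte interleaved metadata yields a 4128-byte logical block
  [[ $(jq .block_size    <<< "$info") == 4128 ]]
  [[ $(jq .md_size       <<< "$info") == 32   ]]
  [[ $(jq .md_interleave <<< "$info") == true ]]
  [[ $(jq .dif_type      <<< "$info") == 0    ]]
)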
00:18:00.501 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:00.758 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:00.758 "name": "pt2", 00:18:00.758 "aliases": [ 00:18:00.758 "00000000-0000-0000-0000-000000000002" 00:18:00.758 ], 00:18:00.758 "product_name": "passthru", 00:18:00.758 "block_size": 4128, 00:18:00.758 "num_blocks": 8192, 00:18:00.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:00.758 "md_size": 32, 00:18:00.758 "md_interleave": true, 00:18:00.758 "dif_type": 0, 00:18:00.758 "assigned_rate_limits": { 00:18:00.758 "rw_ios_per_sec": 0, 00:18:00.758 "rw_mbytes_per_sec": 0, 00:18:00.758 "r_mbytes_per_sec": 0, 00:18:00.759 "w_mbytes_per_sec": 0 00:18:00.759 }, 00:18:00.759 "claimed": true, 00:18:00.759 "claim_type": "exclusive_write", 00:18:00.759 "zoned": false, 00:18:00.759 "supported_io_types": { 00:18:00.759 "read": true, 00:18:00.759 "write": true, 00:18:00.759 "unmap": true, 00:18:00.759 "flush": true, 00:18:00.759 "reset": true, 00:18:00.759 "nvme_admin": false, 00:18:00.759 "nvme_io": false, 00:18:00.759 "nvme_io_md": false, 00:18:00.759 "write_zeroes": true, 00:18:00.759 "zcopy": true, 00:18:00.759 "get_zone_info": false, 00:18:00.759 "zone_management": false, 00:18:00.759 "zone_append": false, 00:18:00.759 "compare": false, 00:18:00.759 "compare_and_write": false, 00:18:00.759 "abort": true, 00:18:00.759 "seek_hole": false, 00:18:00.759 "seek_data": false, 00:18:00.759 "copy": true, 00:18:00.759 "nvme_iov_md": false 00:18:00.759 }, 00:18:00.759 "memory_domains": [ 00:18:00.759 { 00:18:00.759 "dma_device_id": "system", 00:18:00.759 "dma_device_type": 1 00:18:00.759 }, 00:18:00.759 { 00:18:00.759 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:00.759 "dma_device_type": 2 00:18:00.759 } 00:18:00.759 ], 00:18:00.759 "driver_specific": { 00:18:00.759 "passthru": { 00:18:00.759 "name": "pt2", 00:18:00.759 "base_bdev_name": "malloc2" 00:18:00.759 } 00:18:00.759 } 00:18:00.759 }' 00:18:00.759 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:00.759 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:01.018 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:18:01.018 17:36:56 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:01.280 [2024-07-15 17:36:56.892287] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:01.280 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d410e39c-42d0-11ef-96ac-773515fba644 00:18:01.280 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z d410e39c-42d0-11ef-96ac-773515fba644 ']' 00:18:01.280 17:36:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:01.537 [2024-07-15 17:36:57.196239] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:01.537 [2024-07-15 17:36:57.196265] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:01.537 [2024-07-15 17:36:57.196289] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:01.537 [2024-07-15 17:36:57.196305] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:01.537 [2024-07-15 17:36:57.196309] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e0bd3434f00 name raid_bdev1, state offline 00:18:01.537 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.537 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:18:01.795 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:18:01.795 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:18:01.795 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:01.795 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:02.054 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:18:02.054 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:02.311 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:18:02.311 17:36:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:02.568 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.569 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:02.569 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:18:02.825 [2024-07-15 17:36:58.472297] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:02.825 [2024-07-15 17:36:58.472905] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:02.825 [2024-07-15 17:36:58.472933] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:02.825 [2024-07-15 17:36:58.472972] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:02.825 [2024-07-15 17:36:58.472983] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:02.825 [2024-07-15 17:36:58.472987] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e0bd3434c80 name raid_bdev1, state configuring 00:18:02.825 request: 00:18:02.825 { 00:18:02.825 "name": "raid_bdev1", 00:18:02.825 "raid_level": "raid1", 00:18:02.825 "base_bdevs": [ 00:18:02.825 "malloc1", 00:18:02.825 "malloc2" 00:18:02.825 ], 00:18:02.825 "superblock": false, 00:18:02.825 "method": "bdev_raid_create", 00:18:02.825 "req_id": 1 00:18:02.825 } 00:18:02.825 Got JSON-RPC error response 00:18:02.825 response: 00:18:02.825 { 00:18:02.825 "code": -17, 00:18:02.825 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:02.825 } 00:18:02.825 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:18:02.825 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:02.825 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:02.825 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:02.825 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.825 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:18:03.082 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:18:03.082 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:18:03.082 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:03.340 [2024-07-15 17:36:58.948310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:03.340 [2024-07-15 17:36:58.948405] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.340 [2024-07-15 17:36:58.948428] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e0bd3434780 00:18:03.340 [2024-07-15 17:36:58.948436] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.340 [2024-07-15 17:36:58.949054] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.340 [2024-07-15 17:36:58.949081] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:03.340 [2024-07-15 17:36:58.949102] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:03.340 [2024-07-15 17:36:58.949115] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:03.340 pt1 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.340 17:36:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.599 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:03.599 "name": "raid_bdev1", 00:18:03.599 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:03.599 "strip_size_kb": 0, 00:18:03.599 "state": "configuring", 00:18:03.599 
"raid_level": "raid1", 00:18:03.599 "superblock": true, 00:18:03.599 "num_base_bdevs": 2, 00:18:03.599 "num_base_bdevs_discovered": 1, 00:18:03.599 "num_base_bdevs_operational": 2, 00:18:03.599 "base_bdevs_list": [ 00:18:03.599 { 00:18:03.599 "name": "pt1", 00:18:03.599 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:03.599 "is_configured": true, 00:18:03.599 "data_offset": 256, 00:18:03.599 "data_size": 7936 00:18:03.599 }, 00:18:03.599 { 00:18:03.599 "name": null, 00:18:03.599 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:03.599 "is_configured": false, 00:18:03.599 "data_offset": 256, 00:18:03.599 "data_size": 7936 00:18:03.599 } 00:18:03.599 ] 00:18:03.599 }' 00:18:03.599 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:03.599 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:03.857 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:18:03.857 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:18:03.857 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:03.857 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:04.116 [2024-07-15 17:36:59.808361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:04.116 [2024-07-15 17:36:59.808419] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:04.116 [2024-07-15 17:36:59.808432] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e0bd3434f00 00:18:04.116 [2024-07-15 17:36:59.808440] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:04.116 [2024-07-15 17:36:59.808498] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:04.116 [2024-07-15 17:36:59.808508] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:04.116 [2024-07-15 17:36:59.808527] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:04.116 [2024-07-15 17:36:59.808536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:04.116 [2024-07-15 17:36:59.808559] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e0bd3435180 00:18:04.116 [2024-07-15 17:36:59.808563] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:04.116 [2024-07-15 17:36:59.808602] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e0bd3497e20 00:18:04.116 [2024-07-15 17:36:59.808624] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e0bd3435180 00:18:04.116 [2024-07-15 17:36:59.808628] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e0bd3435180 00:18:04.116 [2024-07-15 17:36:59.808642] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:04.116 pt2 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:18:04.116 17:36:59 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.116 17:36:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.375 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.375 "name": "raid_bdev1", 00:18:04.375 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:04.375 "strip_size_kb": 0, 00:18:04.375 "state": "online", 00:18:04.375 "raid_level": "raid1", 00:18:04.375 "superblock": true, 00:18:04.375 "num_base_bdevs": 2, 00:18:04.375 "num_base_bdevs_discovered": 2, 00:18:04.375 "num_base_bdevs_operational": 2, 00:18:04.375 "base_bdevs_list": [ 00:18:04.375 { 00:18:04.375 "name": "pt1", 00:18:04.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:04.375 "is_configured": true, 00:18:04.375 "data_offset": 256, 00:18:04.375 "data_size": 7936 00:18:04.375 }, 00:18:04.375 { 00:18:04.375 "name": "pt2", 00:18:04.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:04.375 "is_configured": true, 00:18:04.375 "data_offset": 256, 00:18:04.375 "data_size": 7936 00:18:04.375 } 00:18:04.375 ] 00:18:04.375 }' 00:18:04.375 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.375 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.633 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:18:04.633 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:18:04.633 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:18:04.633 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:18:04.633 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:18:04.633 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:18:04.633 17:37:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:04.633 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:18:04.891 [2024-07-15 17:37:00.720462] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:05.148 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:18:05.148 "name": "raid_bdev1", 00:18:05.148 "aliases": [ 00:18:05.148 "d410e39c-42d0-11ef-96ac-773515fba644" 00:18:05.148 ], 00:18:05.148 "product_name": "Raid Volume", 00:18:05.148 "block_size": 4128, 00:18:05.148 "num_blocks": 7936, 00:18:05.149 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:05.149 "md_size": 32, 00:18:05.149 "md_interleave": true, 00:18:05.149 "dif_type": 0, 00:18:05.149 "assigned_rate_limits": { 00:18:05.149 "rw_ios_per_sec": 0, 00:18:05.149 "rw_mbytes_per_sec": 0, 00:18:05.149 "r_mbytes_per_sec": 0, 00:18:05.149 "w_mbytes_per_sec": 0 00:18:05.149 }, 00:18:05.149 "claimed": false, 00:18:05.149 "zoned": false, 00:18:05.149 "supported_io_types": { 00:18:05.149 "read": true, 00:18:05.149 "write": true, 00:18:05.149 "unmap": false, 00:18:05.149 "flush": false, 00:18:05.149 "reset": true, 00:18:05.149 "nvme_admin": false, 00:18:05.149 "nvme_io": false, 00:18:05.149 "nvme_io_md": false, 00:18:05.149 "write_zeroes": true, 00:18:05.149 "zcopy": false, 00:18:05.149 "get_zone_info": false, 00:18:05.149 "zone_management": false, 00:18:05.149 "zone_append": false, 00:18:05.149 "compare": false, 00:18:05.149 "compare_and_write": false, 00:18:05.149 "abort": false, 00:18:05.149 "seek_hole": false, 00:18:05.149 "seek_data": false, 00:18:05.149 "copy": false, 00:18:05.149 "nvme_iov_md": false 00:18:05.149 }, 00:18:05.149 "memory_domains": [ 00:18:05.149 { 00:18:05.149 "dma_device_id": "system", 00:18:05.149 "dma_device_type": 1 00:18:05.149 }, 00:18:05.149 { 00:18:05.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.149 "dma_device_type": 2 00:18:05.149 }, 00:18:05.149 { 00:18:05.149 "dma_device_id": "system", 00:18:05.149 "dma_device_type": 1 00:18:05.149 }, 00:18:05.149 { 00:18:05.149 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.149 "dma_device_type": 2 00:18:05.149 } 00:18:05.149 ], 00:18:05.149 "driver_specific": { 00:18:05.149 "raid": { 00:18:05.149 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:05.149 "strip_size_kb": 0, 00:18:05.149 "state": "online", 00:18:05.149 "raid_level": "raid1", 00:18:05.149 "superblock": true, 00:18:05.149 "num_base_bdevs": 2, 00:18:05.149 "num_base_bdevs_discovered": 2, 00:18:05.149 "num_base_bdevs_operational": 2, 00:18:05.149 "base_bdevs_list": [ 00:18:05.149 { 00:18:05.149 "name": "pt1", 00:18:05.149 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.149 "is_configured": true, 00:18:05.149 "data_offset": 256, 00:18:05.149 "data_size": 7936 00:18:05.149 }, 00:18:05.149 { 00:18:05.149 "name": "pt2", 00:18:05.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.149 "is_configured": true, 00:18:05.149 "data_offset": 256, 00:18:05.149 "data_size": 7936 00:18:05.149 } 00:18:05.149 ] 00:18:05.149 } 00:18:05.149 } 00:18:05.149 }' 00:18:05.149 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:05.149 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- 
# base_bdev_names='pt1 00:18:05.149 pt2' 00:18:05.149 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.149 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:18:05.149 17:37:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.406 "name": "pt1", 00:18:05.406 "aliases": [ 00:18:05.406 "00000000-0000-0000-0000-000000000001" 00:18:05.406 ], 00:18:05.406 "product_name": "passthru", 00:18:05.406 "block_size": 4128, 00:18:05.406 "num_blocks": 8192, 00:18:05.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:05.406 "md_size": 32, 00:18:05.406 "md_interleave": true, 00:18:05.406 "dif_type": 0, 00:18:05.406 "assigned_rate_limits": { 00:18:05.406 "rw_ios_per_sec": 0, 00:18:05.406 "rw_mbytes_per_sec": 0, 00:18:05.406 "r_mbytes_per_sec": 0, 00:18:05.406 "w_mbytes_per_sec": 0 00:18:05.406 }, 00:18:05.406 "claimed": true, 00:18:05.406 "claim_type": "exclusive_write", 00:18:05.406 "zoned": false, 00:18:05.406 "supported_io_types": { 00:18:05.406 "read": true, 00:18:05.406 "write": true, 00:18:05.406 "unmap": true, 00:18:05.406 "flush": true, 00:18:05.406 "reset": true, 00:18:05.406 "nvme_admin": false, 00:18:05.406 "nvme_io": false, 00:18:05.406 "nvme_io_md": false, 00:18:05.406 "write_zeroes": true, 00:18:05.406 "zcopy": true, 00:18:05.406 "get_zone_info": false, 00:18:05.406 "zone_management": false, 00:18:05.406 "zone_append": false, 00:18:05.406 "compare": false, 00:18:05.406 "compare_and_write": false, 00:18:05.406 "abort": true, 00:18:05.406 "seek_hole": false, 00:18:05.406 "seek_data": false, 00:18:05.406 "copy": true, 00:18:05.406 "nvme_iov_md": false 00:18:05.406 }, 00:18:05.406 "memory_domains": [ 00:18:05.406 { 00:18:05.406 "dma_device_id": "system", 00:18:05.406 "dma_device_type": 1 00:18:05.406 }, 00:18:05.406 { 00:18:05.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.406 "dma_device_type": 2 00:18:05.406 } 00:18:05.406 ], 00:18:05.406 "driver_specific": { 00:18:05.406 "passthru": { 00:18:05.406 "name": "pt1", 00:18:05.406 "base_bdev_name": "malloc1" 00:18:05.406 } 00:18:05.406 } 00:18:05.406 }' 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
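(Note: the rest of the trace repeats the property check for pt2, confirms the raid UUID is unchanged, then deletes pt1 and verifies raid_bdev1 stays online with only one discovered base bdev; after the raid is deleted and pt2 is re-created, the superblock written by -s is found on pt2 and raid_bdev1 is re-assembled in the same degraded state. A hedged sketch of the degraded-state check, using the same socket as the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # drop one leg; raid1 should remain online in a degraded state
  $rpc -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
  $rpc -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all \
    | jq -r '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
  # expected output: "online 1/2"; deleting the raid and re-creating pt2 then re-assembles
  # it from the superblock found on pt2 (see the examine messages later in the trace)
)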
00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:18:05.406 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:18:05.664 "name": "pt2", 00:18:05.664 "aliases": [ 00:18:05.664 "00000000-0000-0000-0000-000000000002" 00:18:05.664 ], 00:18:05.664 "product_name": "passthru", 00:18:05.664 "block_size": 4128, 00:18:05.664 "num_blocks": 8192, 00:18:05.664 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:05.664 "md_size": 32, 00:18:05.664 "md_interleave": true, 00:18:05.664 "dif_type": 0, 00:18:05.664 "assigned_rate_limits": { 00:18:05.664 "rw_ios_per_sec": 0, 00:18:05.664 "rw_mbytes_per_sec": 0, 00:18:05.664 "r_mbytes_per_sec": 0, 00:18:05.664 "w_mbytes_per_sec": 0 00:18:05.664 }, 00:18:05.664 "claimed": true, 00:18:05.664 "claim_type": "exclusive_write", 00:18:05.664 "zoned": false, 00:18:05.664 "supported_io_types": { 00:18:05.664 "read": true, 00:18:05.664 "write": true, 00:18:05.664 "unmap": true, 00:18:05.664 "flush": true, 00:18:05.664 "reset": true, 00:18:05.664 "nvme_admin": false, 00:18:05.664 "nvme_io": false, 00:18:05.664 "nvme_io_md": false, 00:18:05.664 "write_zeroes": true, 00:18:05.664 "zcopy": true, 00:18:05.664 "get_zone_info": false, 00:18:05.664 "zone_management": false, 00:18:05.664 "zone_append": false, 00:18:05.664 "compare": false, 00:18:05.664 "compare_and_write": false, 00:18:05.664 "abort": true, 00:18:05.664 "seek_hole": false, 00:18:05.664 "seek_data": false, 00:18:05.664 "copy": true, 00:18:05.664 "nvme_iov_md": false 00:18:05.664 }, 00:18:05.664 "memory_domains": [ 00:18:05.664 { 00:18:05.664 "dma_device_id": "system", 00:18:05.664 "dma_device_type": 1 00:18:05.664 }, 00:18:05.664 { 00:18:05.664 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:05.664 "dma_device_type": 2 00:18:05.664 } 00:18:05.664 ], 00:18:05.664 "driver_specific": { 00:18:05.664 "passthru": { 00:18:05.664 "name": "pt2", 00:18:05.664 "base_bdev_name": "malloc2" 00:18:05.664 } 00:18:05.664 } 00:18:05.664 }' 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.664 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:18:05.923 17:37:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:18:05.923 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.924 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:18:05.924 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:18:05.924 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:05.924 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:18:06.182 [2024-07-15 17:37:01.788481] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:06.182 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' d410e39c-42d0-11ef-96ac-773515fba644 '!=' d410e39c-42d0-11ef-96ac-773515fba644 ']' 00:18:06.182 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:18:06.182 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:18:06.182 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:18:06.182 17:37:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:18:06.441 [2024-07-15 17:37:02.024465] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.441 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.699 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:06.699 "name": "raid_bdev1", 00:18:06.699 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:06.699 "strip_size_kb": 0, 00:18:06.699 "state": "online", 
00:18:06.699 "raid_level": "raid1", 00:18:06.699 "superblock": true, 00:18:06.699 "num_base_bdevs": 2, 00:18:06.699 "num_base_bdevs_discovered": 1, 00:18:06.699 "num_base_bdevs_operational": 1, 00:18:06.699 "base_bdevs_list": [ 00:18:06.699 { 00:18:06.699 "name": null, 00:18:06.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.699 "is_configured": false, 00:18:06.699 "data_offset": 256, 00:18:06.699 "data_size": 7936 00:18:06.699 }, 00:18:06.699 { 00:18:06.699 "name": "pt2", 00:18:06.699 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:06.699 "is_configured": true, 00:18:06.699 "data_offset": 256, 00:18:06.699 "data_size": 7936 00:18:06.699 } 00:18:06.699 ] 00:18:06.699 }' 00:18:06.699 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:06.699 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.957 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:07.215 [2024-07-15 17:37:02.900457] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:07.215 [2024-07-15 17:37:02.900488] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:07.215 [2024-07-15 17:37:02.900512] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.215 [2024-07-15 17:37:02.900525] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.215 [2024-07-15 17:37:02.900529] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e0bd3435180 name raid_bdev1, state offline 00:18:07.215 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.215 17:37:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:18:07.473 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:18:07.473 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:18:07.473 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:18:07.473 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:07.473 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:18:07.731 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:18:07.731 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:18:07.731 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:18:07.731 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:18:07.731 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:18:07.731 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:18:07.990 [2024-07-15 17:37:03.668520] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:07.990 [2024-07-15 17:37:03.668605] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:07.990 [2024-07-15 17:37:03.668618] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e0bd3434f00 00:18:07.990 [2024-07-15 17:37:03.668626] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:07.990 [2024-07-15 17:37:03.669251] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:07.990 [2024-07-15 17:37:03.669305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:07.990 [2024-07-15 17:37:03.669326] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:07.990 [2024-07-15 17:37:03.669339] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:07.990 [2024-07-15 17:37:03.669359] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e0bd3435180 00:18:07.990 [2024-07-15 17:37:03.669363] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:07.990 [2024-07-15 17:37:03.669397] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e0bd3497e20 00:18:07.990 [2024-07-15 17:37:03.669410] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e0bd3435180 00:18:07.990 [2024-07-15 17:37:03.669414] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e0bd3435180 00:18:07.990 [2024-07-15 17:37:03.669426] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:07.990 pt2 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:07.990 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:08.248 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:08.248 "name": "raid_bdev1", 00:18:08.248 
"uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:08.248 "strip_size_kb": 0, 00:18:08.248 "state": "online", 00:18:08.248 "raid_level": "raid1", 00:18:08.248 "superblock": true, 00:18:08.248 "num_base_bdevs": 2, 00:18:08.248 "num_base_bdevs_discovered": 1, 00:18:08.248 "num_base_bdevs_operational": 1, 00:18:08.248 "base_bdevs_list": [ 00:18:08.248 { 00:18:08.248 "name": null, 00:18:08.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.248 "is_configured": false, 00:18:08.248 "data_offset": 256, 00:18:08.248 "data_size": 7936 00:18:08.248 }, 00:18:08.248 { 00:18:08.248 "name": "pt2", 00:18:08.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:08.248 "is_configured": true, 00:18:08.248 "data_offset": 256, 00:18:08.248 "data_size": 7936 00:18:08.248 } 00:18:08.248 ] 00:18:08.248 }' 00:18:08.248 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:08.248 17:37:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:08.507 17:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:08.764 [2024-07-15 17:37:04.428528] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:08.764 [2024-07-15 17:37:04.428554] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:08.764 [2024-07-15 17:37:04.428631] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:08.764 [2024-07-15 17:37:04.428645] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:08.764 [2024-07-15 17:37:04.428649] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e0bd3435180 name raid_bdev1, state offline 00:18:08.764 17:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:18:08.764 17:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.022 17:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:18:09.022 17:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:18:09.022 17:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:18:09.022 17:37:04 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:09.281 [2024-07-15 17:37:05.000594] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:09.281 [2024-07-15 17:37:05.000663] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:09.281 [2024-07-15 17:37:05.000675] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e0bd3434c80 00:18:09.281 [2024-07-15 17:37:05.000683] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:09.281 [2024-07-15 17:37:05.001271] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:09.281 [2024-07-15 17:37:05.001297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:09.281 [2024-07-15 
17:37:05.001318] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:09.281 [2024-07-15 17:37:05.001331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:09.281 [2024-07-15 17:37:05.001353] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:09.281 [2024-07-15 17:37:05.001358] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:09.281 [2024-07-15 17:37:05.001364] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e0bd3434780 name raid_bdev1, state configuring 00:18:09.281 [2024-07-15 17:37:05.001374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:09.281 [2024-07-15 17:37:05.001390] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e0bd3434780 00:18:09.281 [2024-07-15 17:37:05.001394] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:09.281 [2024-07-15 17:37:05.001414] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e0bd3497e20 00:18:09.281 [2024-07-15 17:37:05.001426] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e0bd3434780 00:18:09.281 [2024-07-15 17:37:05.001430] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e0bd3434780 00:18:09.281 [2024-07-15 17:37:05.001440] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.281 pt1 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.281 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.540 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.540 "name": "raid_bdev1", 00:18:09.540 "uuid": "d410e39c-42d0-11ef-96ac-773515fba644", 00:18:09.540 "strip_size_kb": 0, 00:18:09.540 "state": 
"online", 00:18:09.540 "raid_level": "raid1", 00:18:09.540 "superblock": true, 00:18:09.540 "num_base_bdevs": 2, 00:18:09.540 "num_base_bdevs_discovered": 1, 00:18:09.540 "num_base_bdevs_operational": 1, 00:18:09.540 "base_bdevs_list": [ 00:18:09.540 { 00:18:09.540 "name": null, 00:18:09.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.540 "is_configured": false, 00:18:09.540 "data_offset": 256, 00:18:09.540 "data_size": 7936 00:18:09.540 }, 00:18:09.540 { 00:18:09.540 "name": "pt2", 00:18:09.540 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:09.540 "is_configured": true, 00:18:09.540 "data_offset": 256, 00:18:09.540 "data_size": 7936 00:18:09.540 } 00:18:09.540 ] 00:18:09.540 }' 00:18:09.540 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.540 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.798 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:18:09.798 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:10.365 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:18:10.365 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:10.365 17:37:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:18:10.365 [2024-07-15 17:37:06.144673] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' d410e39c-42d0-11ef-96ac-773515fba644 '!=' d410e39c-42d0-11ef-96ac-773515fba644 ']' 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 67142 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67142 ']' 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67142 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67142 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:18:10.365 killing process with pid 67142 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67142' 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 67142 00:18:10.365 [2024-07-15 17:37:06.172423] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:10.365 
[2024-07-15 17:37:06.172457] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:10.365 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 67142 00:18:10.365 [2024-07-15 17:37:06.172471] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:10.365 [2024-07-15 17:37:06.172476] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e0bd3434780 name raid_bdev1, state offline 00:18:10.365 [2024-07-15 17:37:06.184598] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:10.638 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:18:10.638 00:18:10.638 real 0m13.783s 00:18:10.638 user 0m24.691s 00:18:10.638 sys 0m2.095s 00:18:10.638 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:10.638 17:37:06 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.638 ************************************ 00:18:10.638 END TEST raid_superblock_test_md_interleaved 00:18:10.638 ************************************ 00:18:10.638 17:37:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:10.638 17:37:06 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:18:10.638 17:37:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:10.638 17:37:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:10.638 17:37:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:10.638 ************************************ 00:18:10.638 START TEST raid_rebuild_test_sb_md_interleaved 00:18:10.638 ************************************ 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:18:10.638 17:37:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=67533 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 67533 /var/tmp/spdk-raid.sock 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67533 ']' 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:10.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:10.638 17:37:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:10.638 [2024-07-15 17:37:06.422891] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:10.638 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:10.638 Zero copy mechanism will not be used. 00:18:10.638 [2024-07-15 17:37:06.423074] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:11.252 EAL: TSC is not safe to use in SMP mode 00:18:11.252 EAL: TSC is not invariant 00:18:11.252 [2024-07-15 17:37:06.981659] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.252 [2024-07-15 17:37:07.070829] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:11.252 [2024-07-15 17:37:07.073016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.252 [2024-07-15 17:37:07.073785] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.252 [2024-07-15 17:37:07.073800] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:11.822 17:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:11.822 17:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:18:11.822 17:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:11.822 17:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:18:12.081 BaseBdev1_malloc 00:18:12.081 17:37:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:12.340 [2024-07-15 17:37:08.102350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:12.340 [2024-07-15 17:37:08.102437] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.340 [2024-07-15 17:37:08.103167] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d4356834780 00:18:12.340 [2024-07-15 17:37:08.103204] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.340 [2024-07-15 17:37:08.104008] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.340 [2024-07-15 17:37:08.104039] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:12.340 BaseBdev1 00:18:12.340 17:37:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:18:12.340 17:37:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:18:12.598 BaseBdev2_malloc 00:18:12.598 17:37:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:12.857 [2024-07-15 17:37:08.642349] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:12.857 [2024-07-15 17:37:08.642443] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:12.857 [2024-07-15 17:37:08.642480] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d4356834c80 00:18:12.857 [2024-07-15 17:37:08.642491] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:12.857 [2024-07-15 17:37:08.643282] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:12.857 [2024-07-15 17:37:08.643309] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:12.857 BaseBdev2 00:18:12.857 17:37:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:18:13.115 spare_malloc 
00:18:13.115 17:37:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:13.373 spare_delay 00:18:13.373 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:13.632 [2024-07-15 17:37:09.410359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:13.632 [2024-07-15 17:37:09.410441] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.632 [2024-07-15 17:37:09.410475] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d4356835400 00:18:13.632 [2024-07-15 17:37:09.410486] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.632 [2024-07-15 17:37:09.411254] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.632 [2024-07-15 17:37:09.411286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:13.632 spare 00:18:13.632 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:18:13.890 [2024-07-15 17:37:09.678389] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:13.890 [2024-07-15 17:37:09.679163] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:13.890 [2024-07-15 17:37:09.679256] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2d4356835680 00:18:13.890 [2024-07-15 17:37:09.679264] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:13.890 [2024-07-15 17:37:09.679312] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d4356897e20 00:18:13.890 [2024-07-15 17:37:09.679329] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2d4356835680 00:18:13.890 [2024-07-15 17:37:09.679334] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2d4356835680 00:18:13.890 [2024-07-15 17:37:09.679350] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:13.890 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:13.890 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:13.890 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:13.890 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:13.891 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:13.891 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:13.891 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:13.891 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:13.891 17:37:09 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:13.891 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:13.891 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:13.891 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.173 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.173 "name": "raid_bdev1", 00:18:14.173 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:14.173 "strip_size_kb": 0, 00:18:14.173 "state": "online", 00:18:14.173 "raid_level": "raid1", 00:18:14.173 "superblock": true, 00:18:14.173 "num_base_bdevs": 2, 00:18:14.173 "num_base_bdevs_discovered": 2, 00:18:14.173 "num_base_bdevs_operational": 2, 00:18:14.173 "base_bdevs_list": [ 00:18:14.173 { 00:18:14.173 "name": "BaseBdev1", 00:18:14.173 "uuid": "da52b3b1-a27a-be51-a947-4646f09da45a", 00:18:14.173 "is_configured": true, 00:18:14.173 "data_offset": 256, 00:18:14.173 "data_size": 7936 00:18:14.173 }, 00:18:14.173 { 00:18:14.173 "name": "BaseBdev2", 00:18:14.173 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:14.173 "is_configured": true, 00:18:14.173 "data_offset": 256, 00:18:14.173 "data_size": 7936 00:18:14.173 } 00:18:14.173 ] 00:18:14.173 }' 00:18:14.173 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.173 17:37:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.739 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:18:14.739 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:18:14.997 [2024-07-15 17:37:10.570448] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.997 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:18:14.997 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.997 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:15.255 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:18:15.255 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:18:15.255 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:18:15.255 17:37:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:18:15.255 [2024-07-15 17:37:11.078407] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:15.514 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.772 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:15.772 "name": "raid_bdev1", 00:18:15.772 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:15.772 "strip_size_kb": 0, 00:18:15.773 "state": "online", 00:18:15.773 "raid_level": "raid1", 00:18:15.773 "superblock": true, 00:18:15.773 "num_base_bdevs": 2, 00:18:15.773 "num_base_bdevs_discovered": 1, 00:18:15.773 "num_base_bdevs_operational": 1, 00:18:15.773 "base_bdevs_list": [ 00:18:15.773 { 00:18:15.773 "name": null, 00:18:15.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.773 "is_configured": false, 00:18:15.773 "data_offset": 256, 00:18:15.773 "data_size": 7936 00:18:15.773 }, 00:18:15.773 { 00:18:15.773 "name": "BaseBdev2", 00:18:15.773 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:15.773 "is_configured": true, 00:18:15.773 "data_offset": 256, 00:18:15.773 "data_size": 7936 00:18:15.773 } 00:18:15.773 ] 00:18:15.773 }' 00:18:15.773 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:15.773 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.031 17:37:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:16.289 [2024-07-15 17:37:12.046431] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:16.289 [2024-07-15 17:37:12.046771] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d4356897ec0 00:18:16.289 [2024-07-15 17:37:12.047761] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:16.289 17:37:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:18:17.660 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:17.661 "name": "raid_bdev1", 00:18:17.661 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:17.661 "strip_size_kb": 0, 00:18:17.661 "state": "online", 00:18:17.661 "raid_level": "raid1", 00:18:17.661 "superblock": true, 00:18:17.661 "num_base_bdevs": 2, 00:18:17.661 "num_base_bdevs_discovered": 2, 00:18:17.661 "num_base_bdevs_operational": 2, 00:18:17.661 "process": { 00:18:17.661 "type": "rebuild", 00:18:17.661 "target": "spare", 00:18:17.661 "progress": { 00:18:17.661 "blocks": 3328, 00:18:17.661 "percent": 41 00:18:17.661 } 00:18:17.661 }, 00:18:17.661 "base_bdevs_list": [ 00:18:17.661 { 00:18:17.661 "name": "spare", 00:18:17.661 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:17.661 "is_configured": true, 00:18:17.661 "data_offset": 256, 00:18:17.661 "data_size": 7936 00:18:17.661 }, 00:18:17.661 { 00:18:17.661 "name": "BaseBdev2", 00:18:17.661 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:17.661 "is_configured": true, 00:18:17.661 "data_offset": 256, 00:18:17.661 "data_size": 7936 00:18:17.661 } 00:18:17.661 ] 00:18:17.661 }' 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:17.661 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:17.918 [2024-07-15 17:37:13.714314] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.176 [2024-07-15 17:37:13.758325] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:18.176 [2024-07-15 17:37:13.758412] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:18.176 [2024-07-15 17:37:13.758420] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:18.176 [2024-07-15 17:37:13.758425] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.176 17:37:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.433 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:18.433 "name": "raid_bdev1", 00:18:18.433 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:18.433 "strip_size_kb": 0, 00:18:18.433 "state": "online", 00:18:18.433 "raid_level": "raid1", 00:18:18.433 "superblock": true, 00:18:18.433 "num_base_bdevs": 2, 00:18:18.433 "num_base_bdevs_discovered": 1, 00:18:18.433 "num_base_bdevs_operational": 1, 00:18:18.433 "base_bdevs_list": [ 00:18:18.433 { 00:18:18.433 "name": null, 00:18:18.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.433 "is_configured": false, 00:18:18.433 "data_offset": 256, 00:18:18.433 "data_size": 7936 00:18:18.433 }, 00:18:18.433 { 00:18:18.433 "name": "BaseBdev2", 00:18:18.433 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:18.433 "is_configured": true, 00:18:18.433 "data_offset": 256, 00:18:18.433 "data_size": 7936 00:18:18.433 } 00:18:18.434 ] 00:18:18.434 }' 00:18:18.434 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:18.434 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:18.691 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:18.691 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:18.691 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:18.691 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:18.691 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:18.691 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:18.691 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:18.949 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:18.949 "name": "raid_bdev1", 00:18:18.949 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:18.949 "strip_size_kb": 0, 00:18:18.949 "state": "online", 00:18:18.949 "raid_level": "raid1", 00:18:18.949 "superblock": true, 00:18:18.949 "num_base_bdevs": 2, 00:18:18.949 "num_base_bdevs_discovered": 1, 00:18:18.949 "num_base_bdevs_operational": 1, 00:18:18.949 "base_bdevs_list": [ 00:18:18.949 { 00:18:18.949 "name": null, 00:18:18.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.949 "is_configured": false, 00:18:18.949 "data_offset": 256, 00:18:18.949 "data_size": 7936 00:18:18.949 }, 00:18:18.949 { 00:18:18.949 "name": "BaseBdev2", 00:18:18.949 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:18.949 "is_configured": true, 00:18:18.949 "data_offset": 256, 00:18:18.949 "data_size": 7936 00:18:18.949 } 00:18:18.949 ] 00:18:18.949 }' 00:18:18.949 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:18.950 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:18.950 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:18.950 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:18.950 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:19.207 [2024-07-15 17:37:14.910573] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:19.207 [2024-07-15 17:37:14.910895] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d4356897e20 00:18:19.207 [2024-07-15 17:37:14.911955] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:19.207 17:37:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:20.141 17:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.141 17:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:20.141 17:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:20.141 17:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:20.141 17:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:20.141 17:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.141 17:37:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:20.413 "name": "raid_bdev1", 00:18:20.413 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:20.413 "strip_size_kb": 0, 00:18:20.413 "state": "online", 00:18:20.413 "raid_level": "raid1", 00:18:20.413 "superblock": true, 00:18:20.413 "num_base_bdevs": 2, 00:18:20.413 "num_base_bdevs_discovered": 2, 00:18:20.413 
"num_base_bdevs_operational": 2, 00:18:20.413 "process": { 00:18:20.413 "type": "rebuild", 00:18:20.413 "target": "spare", 00:18:20.413 "progress": { 00:18:20.413 "blocks": 3072, 00:18:20.413 "percent": 38 00:18:20.413 } 00:18:20.413 }, 00:18:20.413 "base_bdevs_list": [ 00:18:20.413 { 00:18:20.413 "name": "spare", 00:18:20.413 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:20.413 "is_configured": true, 00:18:20.413 "data_offset": 256, 00:18:20.413 "data_size": 7936 00:18:20.413 }, 00:18:20.413 { 00:18:20.413 "name": "BaseBdev2", 00:18:20.413 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:20.413 "is_configured": true, 00:18:20.413 "data_offset": 256, 00:18:20.413 "data_size": 7936 00:18:20.413 } 00:18:20.413 ] 00:18:20.413 }' 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:18:20.413 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=720 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:20.413 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.673 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:20.673 "name": "raid_bdev1", 00:18:20.673 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:20.673 "strip_size_kb": 0, 00:18:20.673 "state": "online", 00:18:20.673 "raid_level": "raid1", 00:18:20.673 "superblock": true, 00:18:20.673 
"num_base_bdevs": 2, 00:18:20.673 "num_base_bdevs_discovered": 2, 00:18:20.673 "num_base_bdevs_operational": 2, 00:18:20.673 "process": { 00:18:20.673 "type": "rebuild", 00:18:20.673 "target": "spare", 00:18:20.674 "progress": { 00:18:20.674 "blocks": 3840, 00:18:20.674 "percent": 48 00:18:20.674 } 00:18:20.674 }, 00:18:20.674 "base_bdevs_list": [ 00:18:20.674 { 00:18:20.674 "name": "spare", 00:18:20.674 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:20.674 "is_configured": true, 00:18:20.674 "data_offset": 256, 00:18:20.674 "data_size": 7936 00:18:20.674 }, 00:18:20.674 { 00:18:20.674 "name": "BaseBdev2", 00:18:20.674 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:20.674 "is_configured": true, 00:18:20.674 "data_offset": 256, 00:18:20.674 "data_size": 7936 00:18:20.674 } 00:18:20.674 ] 00:18:20.674 }' 00:18:20.674 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:20.674 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:20.674 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:20.674 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:20.674 17:37:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:22.067 "name": "raid_bdev1", 00:18:22.067 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:22.067 "strip_size_kb": 0, 00:18:22.067 "state": "online", 00:18:22.067 "raid_level": "raid1", 00:18:22.067 "superblock": true, 00:18:22.067 "num_base_bdevs": 2, 00:18:22.067 "num_base_bdevs_discovered": 2, 00:18:22.067 "num_base_bdevs_operational": 2, 00:18:22.067 "process": { 00:18:22.067 "type": "rebuild", 00:18:22.067 "target": "spare", 00:18:22.067 "progress": { 00:18:22.067 "blocks": 7168, 00:18:22.067 "percent": 90 00:18:22.067 } 00:18:22.067 }, 00:18:22.067 "base_bdevs_list": [ 00:18:22.067 { 00:18:22.067 "name": "spare", 00:18:22.067 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:22.067 "is_configured": true, 00:18:22.067 "data_offset": 256, 00:18:22.067 "data_size": 7936 00:18:22.067 }, 00:18:22.067 { 00:18:22.067 "name": "BaseBdev2", 00:18:22.067 "uuid": 
"0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:22.067 "is_configured": true, 00:18:22.067 "data_offset": 256, 00:18:22.067 "data_size": 7936 00:18:22.067 } 00:18:22.067 ] 00:18:22.067 }' 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:22.067 17:37:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:22.325 [2024-07-15 17:37:18.030982] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:22.325 [2024-07-15 17:37:18.031033] bdev_raid.c:2506:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:22.325 [2024-07-15 17:37:18.031102] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.257 17:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:23.257 17:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.257 17:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:23.257 17:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:23.257 17:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:23.257 17:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:23.257 17:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.257 17:37:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:23.515 "name": "raid_bdev1", 00:18:23.515 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:23.515 "strip_size_kb": 0, 00:18:23.515 "state": "online", 00:18:23.515 "raid_level": "raid1", 00:18:23.515 "superblock": true, 00:18:23.515 "num_base_bdevs": 2, 00:18:23.515 "num_base_bdevs_discovered": 2, 00:18:23.515 "num_base_bdevs_operational": 2, 00:18:23.515 "base_bdevs_list": [ 00:18:23.515 { 00:18:23.515 "name": "spare", 00:18:23.515 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:23.515 "is_configured": true, 00:18:23.515 "data_offset": 256, 00:18:23.515 "data_size": 7936 00:18:23.515 }, 00:18:23.515 { 00:18:23.515 "name": "BaseBdev2", 00:18:23.515 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:23.515 "is_configured": true, 00:18:23.515 "data_offset": 256, 00:18:23.515 "data_size": 7936 00:18:23.515 } 00:18:23.515 ] 00:18:23.515 }' 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:23.515 17:37:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:23.515 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:23.516 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:23.516 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.516 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:23.773 "name": "raid_bdev1", 00:18:23.773 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:23.773 "strip_size_kb": 0, 00:18:23.773 "state": "online", 00:18:23.773 "raid_level": "raid1", 00:18:23.773 "superblock": true, 00:18:23.773 "num_base_bdevs": 2, 00:18:23.773 "num_base_bdevs_discovered": 2, 00:18:23.773 "num_base_bdevs_operational": 2, 00:18:23.773 "base_bdevs_list": [ 00:18:23.773 { 00:18:23.773 "name": "spare", 00:18:23.773 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:23.773 "is_configured": true, 00:18:23.773 "data_offset": 256, 00:18:23.773 "data_size": 7936 00:18:23.773 }, 00:18:23.773 { 00:18:23.773 "name": "BaseBdev2", 00:18:23.773 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:23.773 "is_configured": true, 00:18:23.773 "data_offset": 256, 00:18:23.773 "data_size": 7936 00:18:23.773 } 00:18:23.773 ] 00:18:23.773 }' 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:23.773 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:23.774 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.031 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:24.031 "name": "raid_bdev1", 00:18:24.031 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:24.031 "strip_size_kb": 0, 00:18:24.031 "state": "online", 00:18:24.031 "raid_level": "raid1", 00:18:24.031 "superblock": true, 00:18:24.031 "num_base_bdevs": 2, 00:18:24.031 "num_base_bdevs_discovered": 2, 00:18:24.031 "num_base_bdevs_operational": 2, 00:18:24.031 "base_bdevs_list": [ 00:18:24.031 { 00:18:24.031 "name": "spare", 00:18:24.031 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:24.031 "is_configured": true, 00:18:24.031 "data_offset": 256, 00:18:24.031 "data_size": 7936 00:18:24.031 }, 00:18:24.031 { 00:18:24.031 "name": "BaseBdev2", 00:18:24.031 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:24.031 "is_configured": true, 00:18:24.031 "data_offset": 256, 00:18:24.031 "data_size": 7936 00:18:24.031 } 00:18:24.031 ] 00:18:24.031 }' 00:18:24.031 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:24.031 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:24.289 17:37:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:24.546 [2024-07-15 17:37:20.191139] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:24.546 [2024-07-15 17:37:20.191174] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:24.546 [2024-07-15 17:37:20.191204] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:24.546 [2024-07-15 17:37:20.191223] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:24.546 [2024-07-15 17:37:20.191228] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d4356835680 name raid_bdev1, state offline 00:18:24.546 17:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:18:24.546 17:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:24.819 17:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:24.819 17:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:18:24.819 17:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' 
true = true ']' 00:18:24.819 17:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:25.077 17:37:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:25.335 [2024-07-15 17:37:21.063178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:25.335 [2024-07-15 17:37:21.063260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:25.335 [2024-07-15 17:37:21.063301] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d4356835400 00:18:25.335 [2024-07-15 17:37:21.063311] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:25.335 [2024-07-15 17:37:21.064130] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:25.335 [2024-07-15 17:37:21.064152] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:25.335 [2024-07-15 17:37:21.064187] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:25.335 [2024-07-15 17:37:21.064203] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:25.335 [2024-07-15 17:37:21.064230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:25.335 spare 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.335 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:25.335 [2024-07-15 17:37:21.164237] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2d4356835680 00:18:25.335 [2024-07-15 17:37:21.164281] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:25.335 [2024-07-15 17:37:21.164349] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d4356897e20 00:18:25.335 [2024-07-15 17:37:21.164387] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2d4356835680 00:18:25.335 [2024-07-15 17:37:21.164391] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2d4356835680 00:18:25.335 [2024-07-15 17:37:21.164411] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.593 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:25.593 "name": "raid_bdev1", 00:18:25.593 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:25.593 "strip_size_kb": 0, 00:18:25.593 "state": "online", 00:18:25.593 "raid_level": "raid1", 00:18:25.593 "superblock": true, 00:18:25.593 "num_base_bdevs": 2, 00:18:25.593 "num_base_bdevs_discovered": 2, 00:18:25.593 "num_base_bdevs_operational": 2, 00:18:25.593 "base_bdevs_list": [ 00:18:25.593 { 00:18:25.593 "name": "spare", 00:18:25.593 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:25.593 "is_configured": true, 00:18:25.593 "data_offset": 256, 00:18:25.593 "data_size": 7936 00:18:25.593 }, 00:18:25.593 { 00:18:25.593 "name": "BaseBdev2", 00:18:25.593 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:25.593 "is_configured": true, 00:18:25.593 "data_offset": 256, 00:18:25.593 "data_size": 7936 00:18:25.593 } 00:18:25.593 ] 00:18:25.593 }' 00:18:25.593 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:25.593 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:26.161 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:26.161 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:26.161 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:26.161 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:26.161 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:26.161 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.161 17:37:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:26.419 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.419 "name": "raid_bdev1", 00:18:26.419 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:26.419 "strip_size_kb": 0, 00:18:26.419 "state": "online", 00:18:26.419 "raid_level": "raid1", 00:18:26.419 "superblock": true, 00:18:26.419 "num_base_bdevs": 2, 00:18:26.419 "num_base_bdevs_discovered": 2, 00:18:26.419 "num_base_bdevs_operational": 2, 00:18:26.419 "base_bdevs_list": [ 00:18:26.419 { 00:18:26.419 "name": "spare", 00:18:26.419 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:26.419 "is_configured": true, 00:18:26.419 "data_offset": 256, 00:18:26.419 "data_size": 7936 00:18:26.419 }, 00:18:26.419 { 00:18:26.419 "name": "BaseBdev2", 00:18:26.419 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:26.419 "is_configured": true, 00:18:26.419 "data_offset": 256, 00:18:26.419 "data_size": 7936 00:18:26.419 } 00:18:26.419 ] 00:18:26.419 }' 00:18:26.419 17:37:22 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:26.419 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:26.419 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:26.419 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:26.419 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:26.419 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.678 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:18:26.679 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:26.937 [2024-07-15 17:37:22.579205] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:26.937 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.225 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:27.225 "name": "raid_bdev1", 00:18:27.225 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:27.225 "strip_size_kb": 0, 00:18:27.225 "state": "online", 00:18:27.225 "raid_level": "raid1", 00:18:27.225 "superblock": true, 00:18:27.225 "num_base_bdevs": 2, 00:18:27.225 "num_base_bdevs_discovered": 1, 00:18:27.225 "num_base_bdevs_operational": 1, 00:18:27.225 "base_bdevs_list": [ 00:18:27.225 { 00:18:27.225 "name": null, 00:18:27.225 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:27.225 "is_configured": false, 00:18:27.225 "data_offset": 256, 00:18:27.225 "data_size": 7936 00:18:27.225 }, 
00:18:27.225 { 00:18:27.225 "name": "BaseBdev2", 00:18:27.225 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:27.225 "is_configured": true, 00:18:27.225 "data_offset": 256, 00:18:27.225 "data_size": 7936 00:18:27.225 } 00:18:27.225 ] 00:18:27.225 }' 00:18:27.225 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:27.225 17:37:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:27.484 17:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:27.741 [2024-07-15 17:37:23.463230] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.741 [2024-07-15 17:37:23.463306] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:27.741 [2024-07-15 17:37:23.463313] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:27.741 [2024-07-15 17:37:23.463352] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:27.741 [2024-07-15 17:37:23.463541] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d4356897ec0 00:18:27.741 [2024-07-15 17:37:23.464092] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:27.741 17:37:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:29.116 "name": "raid_bdev1", 00:18:29.116 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:29.116 "strip_size_kb": 0, 00:18:29.116 "state": "online", 00:18:29.116 "raid_level": "raid1", 00:18:29.116 "superblock": true, 00:18:29.116 "num_base_bdevs": 2, 00:18:29.116 "num_base_bdevs_discovered": 2, 00:18:29.116 "num_base_bdevs_operational": 2, 00:18:29.116 "process": { 00:18:29.116 "type": "rebuild", 00:18:29.116 "target": "spare", 00:18:29.116 "progress": { 00:18:29.116 "blocks": 3328, 00:18:29.116 "percent": 41 00:18:29.116 } 00:18:29.116 }, 00:18:29.116 "base_bdevs_list": [ 00:18:29.116 { 00:18:29.116 "name": "spare", 00:18:29.116 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:29.116 "is_configured": true, 00:18:29.116 "data_offset": 256, 00:18:29.116 "data_size": 7936 00:18:29.116 }, 00:18:29.116 { 
00:18:29.116 "name": "BaseBdev2", 00:18:29.116 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:29.116 "is_configured": true, 00:18:29.116 "data_offset": 256, 00:18:29.116 "data_size": 7936 00:18:29.116 } 00:18:29.116 ] 00:18:29.116 }' 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:29.116 17:37:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:29.376 [2024-07-15 17:37:25.031514] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.376 [2024-07-15 17:37:25.071445] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:29.376 [2024-07-15 17:37:25.071509] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.376 [2024-07-15 17:37:25.071516] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:29.376 [2024-07-15 17:37:25.071520] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:29.376 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.635 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:29.635 "name": "raid_bdev1", 00:18:29.635 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:29.635 "strip_size_kb": 0, 00:18:29.635 "state": "online", 00:18:29.635 "raid_level": "raid1", 00:18:29.635 "superblock": true, 
00:18:29.635 "num_base_bdevs": 2, 00:18:29.635 "num_base_bdevs_discovered": 1, 00:18:29.635 "num_base_bdevs_operational": 1, 00:18:29.635 "base_bdevs_list": [ 00:18:29.635 { 00:18:29.635 "name": null, 00:18:29.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:29.635 "is_configured": false, 00:18:29.635 "data_offset": 256, 00:18:29.635 "data_size": 7936 00:18:29.635 }, 00:18:29.635 { 00:18:29.635 "name": "BaseBdev2", 00:18:29.635 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:29.635 "is_configured": true, 00:18:29.635 "data_offset": 256, 00:18:29.635 "data_size": 7936 00:18:29.635 } 00:18:29.635 ] 00:18:29.635 }' 00:18:29.635 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:29.635 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:29.893 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:30.151 [2024-07-15 17:37:25.943560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:30.151 [2024-07-15 17:37:25.943621] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:30.151 [2024-07-15 17:37:25.943680] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d4356835400 00:18:30.151 [2024-07-15 17:37:25.943690] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:30.151 [2024-07-15 17:37:25.943757] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:30.151 [2024-07-15 17:37:25.943767] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:30.151 [2024-07-15 17:37:25.943788] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:30.151 [2024-07-15 17:37:25.943793] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:30.151 [2024-07-15 17:37:25.943797] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:30.151 [2024-07-15 17:37:25.943809] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:30.151 [2024-07-15 17:37:25.943985] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d4356897e20 00:18:30.151 [2024-07-15 17:37:25.944533] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:30.151 spare 00:18:30.151 17:37:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:31.541 "name": "raid_bdev1", 00:18:31.541 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:31.541 "strip_size_kb": 0, 00:18:31.541 "state": "online", 00:18:31.541 "raid_level": "raid1", 00:18:31.541 "superblock": true, 00:18:31.541 "num_base_bdevs": 2, 00:18:31.541 "num_base_bdevs_discovered": 2, 00:18:31.541 "num_base_bdevs_operational": 2, 00:18:31.541 "process": { 00:18:31.541 "type": "rebuild", 00:18:31.541 "target": "spare", 00:18:31.541 "progress": { 00:18:31.541 "blocks": 3328, 00:18:31.541 "percent": 41 00:18:31.541 } 00:18:31.541 }, 00:18:31.541 "base_bdevs_list": [ 00:18:31.541 { 00:18:31.541 "name": "spare", 00:18:31.541 "uuid": "5e81f82d-eb10-1d59-8848-6136b24c0486", 00:18:31.541 "is_configured": true, 00:18:31.541 "data_offset": 256, 00:18:31.541 "data_size": 7936 00:18:31.541 }, 00:18:31.541 { 00:18:31.541 "name": "BaseBdev2", 00:18:31.541 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:31.541 "is_configured": true, 00:18:31.541 "data_offset": 256, 00:18:31.541 "data_size": 7936 00:18:31.541 } 00:18:31.541 ] 00:18:31.541 }' 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:31.541 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:31.799 [2024-07-15 17:37:27.576266] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.058 [2024-07-15 17:37:27.652195] 
bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:32.058 [2024-07-15 17:37:27.652261] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.058 [2024-07-15 17:37:27.652268] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.058 [2024-07-15 17:37:27.652273] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.058 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.316 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:32.316 "name": "raid_bdev1", 00:18:32.316 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:32.316 "strip_size_kb": 0, 00:18:32.316 "state": "online", 00:18:32.316 "raid_level": "raid1", 00:18:32.316 "superblock": true, 00:18:32.316 "num_base_bdevs": 2, 00:18:32.316 "num_base_bdevs_discovered": 1, 00:18:32.316 "num_base_bdevs_operational": 1, 00:18:32.316 "base_bdevs_list": [ 00:18:32.316 { 00:18:32.316 "name": null, 00:18:32.316 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.316 "is_configured": false, 00:18:32.316 "data_offset": 256, 00:18:32.316 "data_size": 7936 00:18:32.316 }, 00:18:32.316 { 00:18:32.317 "name": "BaseBdev2", 00:18:32.317 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:32.317 "is_configured": true, 00:18:32.317 "data_offset": 256, 00:18:32.317 "data_size": 7936 00:18:32.317 } 00:18:32.317 ] 00:18:32.317 }' 00:18:32.317 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:32.317 17:37:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:32.575 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:32.575 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:18:32.575 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:32.575 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:32.575 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:32.575 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:32.575 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.833 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:32.833 "name": "raid_bdev1", 00:18:32.833 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:32.833 "strip_size_kb": 0, 00:18:32.833 "state": "online", 00:18:32.833 "raid_level": "raid1", 00:18:32.833 "superblock": true, 00:18:32.833 "num_base_bdevs": 2, 00:18:32.833 "num_base_bdevs_discovered": 1, 00:18:32.833 "num_base_bdevs_operational": 1, 00:18:32.833 "base_bdevs_list": [ 00:18:32.833 { 00:18:32.833 "name": null, 00:18:32.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.833 "is_configured": false, 00:18:32.833 "data_offset": 256, 00:18:32.833 "data_size": 7936 00:18:32.833 }, 00:18:32.833 { 00:18:32.833 "name": "BaseBdev2", 00:18:32.833 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:32.833 "is_configured": true, 00:18:32.833 "data_offset": 256, 00:18:32.833 "data_size": 7936 00:18:32.833 } 00:18:32.833 ] 00:18:32.833 }' 00:18:32.833 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:32.833 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:32.833 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:32.833 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:32.833 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:33.091 17:37:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:33.349 [2024-07-15 17:37:29.080327] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:33.349 [2024-07-15 17:37:29.080389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:33.349 [2024-07-15 17:37:29.080420] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2d4356834780 00:18:33.349 [2024-07-15 17:37:29.080429] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:33.349 [2024-07-15 17:37:29.080501] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:33.349 [2024-07-15 17:37:29.080517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:33.349 [2024-07-15 17:37:29.080544] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:33.349 [2024-07-15 17:37:29.080551] 
bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:33.349 [2024-07-15 17:37:29.080555] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:33.349 BaseBdev1 00:18:33.349 17:37:29 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:34.724 "name": "raid_bdev1", 00:18:34.724 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:34.724 "strip_size_kb": 0, 00:18:34.724 "state": "online", 00:18:34.724 "raid_level": "raid1", 00:18:34.724 "superblock": true, 00:18:34.724 "num_base_bdevs": 2, 00:18:34.724 "num_base_bdevs_discovered": 1, 00:18:34.724 "num_base_bdevs_operational": 1, 00:18:34.724 "base_bdevs_list": [ 00:18:34.724 { 00:18:34.724 "name": null, 00:18:34.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.724 "is_configured": false, 00:18:34.724 "data_offset": 256, 00:18:34.724 "data_size": 7936 00:18:34.724 }, 00:18:34.724 { 00:18:34.724 "name": "BaseBdev2", 00:18:34.724 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:34.724 "is_configured": true, 00:18:34.724 "data_offset": 256, 00:18:34.724 "data_size": 7936 00:18:34.724 } 00:18:34.724 ] 00:18:34.724 }' 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:34.724 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:34.982 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.982 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:34.982 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:34.982 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:34.982 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:34.982 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:34.982 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:35.240 "name": "raid_bdev1", 00:18:35.240 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:35.240 "strip_size_kb": 0, 00:18:35.240 "state": "online", 00:18:35.240 "raid_level": "raid1", 00:18:35.240 "superblock": true, 00:18:35.240 "num_base_bdevs": 2, 00:18:35.240 "num_base_bdevs_discovered": 1, 00:18:35.240 "num_base_bdevs_operational": 1, 00:18:35.240 "base_bdevs_list": [ 00:18:35.240 { 00:18:35.240 "name": null, 00:18:35.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.240 "is_configured": false, 00:18:35.240 "data_offset": 256, 00:18:35.240 "data_size": 7936 00:18:35.240 }, 00:18:35.240 { 00:18:35.240 "name": "BaseBdev2", 00:18:35.240 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:35.240 "is_configured": true, 00:18:35.240 "data_offset": 256, 00:18:35.240 "data_size": 7936 00:18:35.240 } 00:18:35.240 ] 00:18:35.240 }' 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:35.240 17:37:30 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.498 [2024-07-15 17:37:31.244450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.498 [2024-07-15 17:37:31.244517] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:35.498 [2024-07-15 17:37:31.244523] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:35.498 request: 00:18:35.498 { 00:18:35.498 "base_bdev": "BaseBdev1", 00:18:35.498 "raid_bdev": "raid_bdev1", 00:18:35.498 "method": "bdev_raid_add_base_bdev", 00:18:35.498 "req_id": 1 00:18:35.498 } 00:18:35.498 Got JSON-RPC error response 00:18:35.498 response: 00:18:35.498 { 00:18:35.498 "code": -22, 00:18:35.498 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:35.498 } 00:18:35.498 17:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:18:35.498 17:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:35.499 17:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:35.499 17:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:35.499 17:37:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:36.873 "name": "raid_bdev1", 00:18:36.873 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:36.873 "strip_size_kb": 0, 00:18:36.873 "state": "online", 00:18:36.873 "raid_level": "raid1", 00:18:36.873 "superblock": true, 00:18:36.873 "num_base_bdevs": 2, 00:18:36.873 "num_base_bdevs_discovered": 1, 00:18:36.873 "num_base_bdevs_operational": 1, 00:18:36.873 "base_bdevs_list": [ 00:18:36.873 { 00:18:36.873 "name": null, 00:18:36.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:36.873 "is_configured": false, 00:18:36.873 "data_offset": 256, 00:18:36.873 "data_size": 7936 00:18:36.873 }, 00:18:36.873 { 00:18:36.873 "name": "BaseBdev2", 00:18:36.873 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:36.873 "is_configured": true, 00:18:36.873 "data_offset": 256, 00:18:36.873 "data_size": 7936 00:18:36.873 } 00:18:36.873 ] 00:18:36.873 }' 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:36.873 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.441 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.441 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:37.441 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:37.441 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:37.441 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:37.441 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:37.441 17:37:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.441 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:37.441 "name": "raid_bdev1", 00:18:37.441 "uuid": "dccc535f-42d0-11ef-96ac-773515fba644", 00:18:37.441 "strip_size_kb": 0, 00:18:37.441 "state": "online", 00:18:37.441 "raid_level": "raid1", 00:18:37.441 "superblock": true, 00:18:37.441 "num_base_bdevs": 2, 00:18:37.441 "num_base_bdevs_discovered": 1, 00:18:37.441 "num_base_bdevs_operational": 1, 00:18:37.441 "base_bdevs_list": [ 00:18:37.441 { 00:18:37.441 "name": null, 00:18:37.441 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.441 "is_configured": false, 00:18:37.441 "data_offset": 256, 00:18:37.441 "data_size": 7936 00:18:37.441 }, 00:18:37.441 { 00:18:37.441 "name": "BaseBdev2", 00:18:37.441 "uuid": "0b734864-5b16-405c-b4ae-735f634af6a0", 00:18:37.441 "is_configured": true, 00:18:37.441 "data_offset": 256, 00:18:37.441 "data_size": 7936 00:18:37.441 } 00:18:37.441 ] 00:18:37.441 }' 00:18:37.441 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 67533 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67533 ']' 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67533 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67533 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:18:37.699 killing process with pid 67533 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67533' 00:18:37.699 Received shutdown signal, test time was about 60.000000 seconds 00:18:37.699 00:18:37.699 Latency(us) 00:18:37.699 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.699 =================================================================================================================== 00:18:37.699 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 67533 00:18:37.699 [2024-07-15 17:37:33.290327] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 67533 00:18:37.699 [2024-07-15 17:37:33.290382] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.699 [2024-07-15 17:37:33.290403] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.699 [2024-07-15 17:37:33.290412] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d4356835680 name raid_bdev1, state offline 00:18:37.699 [2024-07-15 17:37:33.308950] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:18:37.699 ************************************ 00:18:37.699 END TEST raid_rebuild_test_sb_md_interleaved 00:18:37.699 ************************************ 00:18:37.699 00:18:37.699 real 0m27.076s 00:18:37.699 user 0m42.069s 00:18:37.699 sys 0m2.728s 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.699 17:37:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:37.699 17:37:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:37.699 17:37:33 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:18:37.699 17:37:33 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:18:37.699 17:37:33 bdev_raid -- bdev/bdev_raid.sh@58 -- 
# '[' -n 67533 ']' 00:18:37.699 17:37:33 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 67533 00:18:37.699 17:37:33 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:18:37.699 00:18:37.699 real 11m47.523s 00:18:37.699 user 20m38.320s 00:18:37.699 sys 1m45.961s 00:18:37.699 17:37:33 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.957 17:37:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:37.957 ************************************ 00:18:37.957 END TEST bdev_raid 00:18:37.957 ************************************ 00:18:37.957 17:37:33 -- common/autotest_common.sh@1142 -- # return 0 00:18:37.957 17:37:33 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:37.957 17:37:33 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:37.957 17:37:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.957 17:37:33 -- common/autotest_common.sh@10 -- # set +x 00:18:37.957 ************************************ 00:18:37.957 START TEST bdevperf_config 00:18:37.957 ************************************ 00:18:37.957 17:37:33 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:37.957 * Looking for test storage... 00:18:37.957 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:37.957 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:37.957 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:37.957 17:37:33 bdevperf_config -- 
bdevperf/test_config.sh@19 -- # create_job job1 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:37.957 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:37.957 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:37.957 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:37.957 17:37:33 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:41.242 17:37:36 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-15 17:37:33.742196] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:41.242 [2024-07-15 17:37:33.742429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:41.242 Using job config with 4 jobs 00:18:41.242 EAL: TSC is not safe to use in SMP mode 00:18:41.242 EAL: TSC is not invariant 00:18:41.242 [2024-07-15 17:37:34.257674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.242 [2024-07-15 17:37:34.337971] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:41.242 [2024-07-15 17:37:34.340147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.242 cpumask for '\''job0'\'' is too big 00:18:41.242 cpumask for '\''job1'\'' is too big 00:18:41.242 cpumask for '\''job2'\'' is too big 00:18:41.242 cpumask for '\''job3'\'' is too big 00:18:41.242 Running I/O for 2 seconds... 
00:18:41.242 00:18:41.242 Latency(us) 00:18:41.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318725.80 311.26 0.00 0.00 802.92 223.42 1482.01 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318713.70 311.24 0.00 0.00 802.75 191.77 1251.14 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318695.87 311.23 0.00 0.00 802.64 194.56 1117.09 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318772.95 311.30 0.00 0.00 802.29 70.28 1102.20 00:18:41.242 =================================================================================================================== 00:18:41.242 Total : 1274908.32 1245.03 0.00 0.00 802.65 70.28 1482.01' 00:18:41.242 17:37:36 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-15 17:37:33.742196] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:41.242 [2024-07-15 17:37:33.742429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:41.242 Using job config with 4 jobs 00:18:41.242 EAL: TSC is not safe to use in SMP mode 00:18:41.242 EAL: TSC is not invariant 00:18:41.242 [2024-07-15 17:37:34.257674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.242 [2024-07-15 17:37:34.337971] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:41.242 [2024-07-15 17:37:34.340147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.242 cpumask for '\''job0'\'' is too big 00:18:41.242 cpumask for '\''job1'\'' is too big 00:18:41.242 cpumask for '\''job2'\'' is too big 00:18:41.242 cpumask for '\''job3'\'' is too big 00:18:41.242 Running I/O for 2 seconds... 00:18:41.242 00:18:41.242 Latency(us) 00:18:41.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318725.80 311.26 0.00 0.00 802.92 223.42 1482.01 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318713.70 311.24 0.00 0.00 802.75 191.77 1251.14 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318695.87 311.23 0.00 0.00 802.64 194.56 1117.09 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318772.95 311.30 0.00 0.00 802.29 70.28 1102.20 00:18:41.242 =================================================================================================================== 00:18:41.242 Total : 1274908.32 1245.03 0.00 0.00 802.65 70.28 1482.01' 00:18:41.242 17:37:36 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 17:37:33.742196] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:18:41.242 [2024-07-15 17:37:33.742429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:41.242 Using job config with 4 jobs 00:18:41.242 EAL: TSC is not safe to use in SMP mode 00:18:41.242 EAL: TSC is not invariant 00:18:41.242 [2024-07-15 17:37:34.257674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.242 [2024-07-15 17:37:34.337971] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:41.242 [2024-07-15 17:37:34.340147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.242 cpumask for '\''job0'\'' is too big 00:18:41.242 cpumask for '\''job1'\'' is too big 00:18:41.242 cpumask for '\''job2'\'' is too big 00:18:41.242 cpumask for '\''job3'\'' is too big 00:18:41.242 Running I/O for 2 seconds... 00:18:41.242 00:18:41.242 Latency(us) 00:18:41.242 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318725.80 311.26 0.00 0.00 802.92 223.42 1482.01 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318713.70 311.24 0.00 0.00 802.75 191.77 1251.14 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318695.87 311.23 0.00 0.00 802.64 194.56 1117.09 00:18:41.242 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:41.242 Malloc0 : 2.00 318772.95 311.30 0.00 0.00 802.29 70.28 1102.20 00:18:41.242 =================================================================================================================== 00:18:41.242 Total : 1274908.32 1245.03 0.00 0.00 802.65 70.28 1482.01' 00:18:41.242 17:37:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:41.242 17:37:36 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:41.242 17:37:36 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:18:41.242 17:37:36 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:41.242 [2024-07-15 17:37:36.587861] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:41.242 [2024-07-15 17:37:36.588192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:41.501 EAL: TSC is not safe to use in SMP mode 00:18:41.501 EAL: TSC is not invariant 00:18:41.501 [2024-07-15 17:37:37.132051] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.501 [2024-07-15 17:37:37.213391] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:41.501 [2024-07-15 17:37:37.215714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.501 cpumask for 'job0' is too big 00:18:41.501 cpumask for 'job1' is too big 00:18:41.501 cpumask for 'job2' is too big 00:18:41.501 cpumask for 'job3' is too big 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:18:44.032 Running I/O for 2 seconds... 
00:18:44.032 00:18:44.032 Latency(us) 00:18:44.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.032 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:44.032 Malloc0 : 2.00 323149.30 315.58 0.00 0.00 791.93 213.18 1452.22 00:18:44.032 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:44.032 Malloc0 : 2.00 323137.83 315.56 0.00 0.00 791.79 187.11 1370.30 00:18:44.032 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:44.032 Malloc0 : 2.00 323182.26 315.61 0.00 0.00 791.53 185.25 1295.83 00:18:44.032 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:44.032 Malloc0 : 2.00 323166.60 315.59 0.00 0.00 791.40 155.46 1228.80 00:18:44.032 =================================================================================================================== 00:18:44.032 Total : 1292636.00 1262.34 0.00 0.00 791.66 155.46 1452.22' 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:44.032 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:18:44.032 17:37:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:44.033 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:44.033 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:44.033 17:37:39 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
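The create_job calls traced at test_config.sh@29-31 above build test.conf one INI-style section at a time before bdevperf is launched with -j. A minimal sketch of the generated file, assuming it carries only the pieces visible in the trace (the [jobN] section header plus rw= and filename= when those arguments are given), could be reproduced by hand like this:

# Hypothetical reconstruction of the generated test.conf -- assumed layout,
# not copied from the repository; only section names, rw=write and the
# Malloc0 filename are visible in the trace above.
cat > /tmp/example-test.conf <<'EOF'
[job0]
rw=write
filename=Malloc0

[job1]
rw=write
filename=Malloc0

[job2]
rw=write
filename=Malloc0
EOF

# The trace then hands this file to bdevperf together with the bdev config:
#   build/examples/bdevperf -t 2 --json test/bdev/bdevperf/conf.json \
#       -j /tmp/example-test.conf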
00:18:46.564 17:37:42 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-15 17:37:39.465567] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:46.564 [2024-07-15 17:37:39.465765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:46.564 Using job config with 3 jobs 00:18:46.564 EAL: TSC is not safe to use in SMP mode 00:18:46.564 EAL: TSC is not invariant 00:18:46.564 [2024-07-15 17:37:40.033068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.564 [2024-07-15 17:37:40.116873] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:46.564 [2024-07-15 17:37:40.119147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.564 cpumask for '\''job0'\'' is too big 00:18:46.564 cpumask for '\''job1'\'' is too big 00:18:46.564 cpumask for '\''job2'\'' is too big 00:18:46.564 Running I/O for 2 seconds... 00:18:46.564 00:18:46.564 Latency(us) 00:18:46.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.564 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.564 Malloc0 : 2.00 403437.86 393.98 0.00 0.00 634.28 236.45 1161.78 00:18:46.564 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.564 Malloc0 : 2.00 403422.75 393.97 0.00 0.00 634.16 203.87 990.49 00:18:46.564 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.564 Malloc0 : 2.00 403408.84 393.95 0.00 0.00 634.03 191.77 997.94 00:18:46.564 =================================================================================================================== 00:18:46.564 Total : 1210269.45 1181.90 0.00 0.00 634.16 191.77 1161.78' 00:18:46.564 17:37:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-15 17:37:39.465567] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:46.564 [2024-07-15 17:37:39.465765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:46.564 Using job config with 3 jobs 00:18:46.564 EAL: TSC is not safe to use in SMP mode 00:18:46.564 EAL: TSC is not invariant 00:18:46.564 [2024-07-15 17:37:40.033068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.564 [2024-07-15 17:37:40.116873] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:46.564 [2024-07-15 17:37:40.119147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.564 cpumask for '\''job0'\'' is too big 00:18:46.564 cpumask for '\''job1'\'' is too big 00:18:46.564 cpumask for '\''job2'\'' is too big 00:18:46.564 Running I/O for 2 seconds... 
00:18:46.564 00:18:46.564 Latency(us) 00:18:46.564 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.564 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.564 Malloc0 : 2.00 403437.86 393.98 0.00 0.00 634.28 236.45 1161.78 00:18:46.564 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.564 Malloc0 : 2.00 403422.75 393.97 0.00 0.00 634.16 203.87 990.49 00:18:46.564 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.564 Malloc0 : 2.00 403408.84 393.95 0.00 0.00 634.03 191.77 997.94 00:18:46.564 =================================================================================================================== 00:18:46.565 Total : 1210269.45 1181.90 0.00 0.00 634.16 191.77 1161.78' 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 17:37:39.465567] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:46.565 [2024-07-15 17:37:39.465765] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:46.565 Using job config with 3 jobs 00:18:46.565 EAL: TSC is not safe to use in SMP mode 00:18:46.565 EAL: TSC is not invariant 00:18:46.565 [2024-07-15 17:37:40.033068] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.565 [2024-07-15 17:37:40.116873] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:46.565 [2024-07-15 17:37:40.119147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.565 cpumask for '\''job0'\'' is too big 00:18:46.565 cpumask for '\''job1'\'' is too big 00:18:46.565 cpumask for '\''job2'\'' is too big 00:18:46.565 Running I/O for 2 seconds... 
00:18:46.565 00:18:46.565 Latency(us) 00:18:46.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.565 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.565 Malloc0 : 2.00 403437.86 393.98 0.00 0.00 634.28 236.45 1161.78 00:18:46.565 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.565 Malloc0 : 2.00 403422.75 393.97 0.00 0.00 634.16 203.87 990.49 00:18:46.565 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:46.565 Malloc0 : 2.00 403408.84 393.95 0.00 0.00 634.03 191.77 997.94 00:18:46.565 =================================================================================================================== 00:18:46.565 Total : 1210269.45 1181.90 0.00 0.00 634.16 191.77 1161.78' 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:46.565 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:46.565 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:46.565 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:18:46.565 
17:37:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:46.565 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:46.565 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:46.565 17:37:42 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:49.852 17:37:45 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-15 17:37:42.385503] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:49.852 [2024-07-15 17:37:42.385772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:49.852 Using job config with 4 jobs 00:18:49.852 EAL: TSC is not safe to use in SMP mode 00:18:49.852 EAL: TSC is not invariant 00:18:49.852 [2024-07-15 17:37:42.910806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.852 [2024-07-15 17:37:42.994177] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:49.852 [2024-07-15 17:37:42.996349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.852 cpumask for '\''job0'\'' is too big 00:18:49.852 cpumask for '\''job1'\'' is too big 00:18:49.852 cpumask for '\''job2'\'' is too big 00:18:49.852 cpumask for '\''job3'\'' is too big 00:18:49.852 Running I/O for 2 seconds... 
00:18:49.852 00:18:49.852 Latency(us) 00:18:49.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.852 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.852 Malloc0 : 2.00 154989.77 151.36 0.00 0.00 1651.41 487.80 3083.18 00:18:49.852 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.852 Malloc1 : 2.00 154982.88 151.35 0.00 0.00 1651.22 422.63 3083.18 00:18:49.852 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.852 Malloc0 : 2.00 155001.12 151.37 0.00 0.00 1650.42 437.53 2591.65 00:18:49.852 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.852 Malloc1 : 2.00 154992.38 151.36 0.00 0.00 1650.33 383.54 2591.65 00:18:49.852 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.852 Malloc0 : 2.00 154985.03 151.35 0.00 0.00 1649.89 495.24 2085.24 00:18:49.852 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.852 Malloc1 : 2.00 154976.44 151.34 0.00 0.00 1649.84 452.42 2040.56 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 154968.94 151.34 0.00 0.00 1649.31 424.50 2025.66 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 155068.68 151.43 0.00 0.00 1648.04 113.57 2025.66 00:18:49.853 =================================================================================================================== 00:18:49.853 Total : 1239965.23 1210.90 0.00 0.00 1650.06 113.57 3083.18' 00:18:49.853 17:37:45 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-15 17:37:42.385503] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:49.853 [2024-07-15 17:37:42.385772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:49.853 Using job config with 4 jobs 00:18:49.853 EAL: TSC is not safe to use in SMP mode 00:18:49.853 EAL: TSC is not invariant 00:18:49.853 [2024-07-15 17:37:42.910806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.853 [2024-07-15 17:37:42.994177] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:49.853 [2024-07-15 17:37:42.996349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.853 cpumask for '\''job0'\'' is too big 00:18:49.853 cpumask for '\''job1'\'' is too big 00:18:49.853 cpumask for '\''job2'\'' is too big 00:18:49.853 cpumask for '\''job3'\'' is too big 00:18:49.853 Running I/O for 2 seconds... 
00:18:49.853 00:18:49.853 Latency(us) 00:18:49.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 154989.77 151.36 0.00 0.00 1651.41 487.80 3083.18 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 154982.88 151.35 0.00 0.00 1651.22 422.63 3083.18 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 155001.12 151.37 0.00 0.00 1650.42 437.53 2591.65 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 154992.38 151.36 0.00 0.00 1650.33 383.54 2591.65 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 154985.03 151.35 0.00 0.00 1649.89 495.24 2085.24 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 154976.44 151.34 0.00 0.00 1649.84 452.42 2040.56 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 154968.94 151.34 0.00 0.00 1649.31 424.50 2025.66 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 155068.68 151.43 0.00 0.00 1648.04 113.57 2025.66 00:18:49.853 =================================================================================================================== 00:18:49.853 Total : 1239965.23 1210.90 0.00 0.00 1650.06 113.57 3083.18' 00:18:49.853 17:37:45 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-15 17:37:42.385503] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:49.853 [2024-07-15 17:37:42.385772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:49.853 Using job config with 4 jobs 00:18:49.853 EAL: TSC is not safe to use in SMP mode 00:18:49.853 EAL: TSC is not invariant 00:18:49.853 [2024-07-15 17:37:42.910806] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.853 [2024-07-15 17:37:42.994177] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:49.853 [2024-07-15 17:37:42.996349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.853 cpumask for '\''job0'\'' is too big 00:18:49.853 cpumask for '\''job1'\'' is too big 00:18:49.853 cpumask for '\''job2'\'' is too big 00:18:49.853 cpumask for '\''job3'\'' is too big 00:18:49.853 Running I/O for 2 seconds... 
00:18:49.853 00:18:49.853 Latency(us) 00:18:49.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 154989.77 151.36 0.00 0.00 1651.41 487.80 3083.18 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 154982.88 151.35 0.00 0.00 1651.22 422.63 3083.18 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 155001.12 151.37 0.00 0.00 1650.42 437.53 2591.65 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 154992.38 151.36 0.00 0.00 1650.33 383.54 2591.65 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 154985.03 151.35 0.00 0.00 1649.89 495.24 2085.24 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 154976.44 151.34 0.00 0.00 1649.84 452.42 2040.56 00:18:49.853 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc0 : 2.00 154968.94 151.34 0.00 0.00 1649.31 424.50 2025.66 00:18:49.853 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:49.853 Malloc1 : 2.00 155068.68 151.43 0.00 0.00 1648.04 113.57 2025.66 00:18:49.853 =================================================================================================================== 00:18:49.853 Total : 1239965.23 1210.90 0.00 0.00 1650.06 113.57 3083.18' 00:18:49.853 17:37:45 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:49.853 17:37:45 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:49.853 17:37:45 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:18:49.853 17:37:45 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:18:49.853 17:37:45 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:49.853 17:37:45 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:49.853 00:18:49.853 real 0m11.673s 00:18:49.853 user 0m9.278s 00:18:49.853 sys 0m2.290s 00:18:49.853 17:37:45 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.853 17:37:45 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:18:49.853 ************************************ 00:18:49.853 END TEST bdevperf_config 00:18:49.853 ************************************ 00:18:49.853 17:37:45 -- common/autotest_common.sh@1142 -- # return 0 00:18:49.853 17:37:45 -- spdk/autotest.sh@192 -- # uname -s 00:18:49.853 17:37:45 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:18:49.853 17:37:45 -- spdk/autotest.sh@198 -- # uname -s 00:18:49.853 17:37:45 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:18:49.853 17:37:45 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:18:49.853 17:37:45 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:49.853 17:37:45 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:49.853 17:37:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.853 17:37:45 -- common/autotest_common.sh@10 -- # set +x 00:18:49.853 
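Every test in this log is bracketed by the same START TEST / END TEST banners and real/user/sys timings, emitted by the run_test helper in common/autotest_common.sh. Its source is not part of this log; a simplified bash sketch of the pattern the trace implies (banner, timed run, banner) is:

# Simplified, assumed shape of the run_test wrapper seen in the trace; the
# real helper also manages xtrace and argument checks such as '[' 2 -le 1 ']'.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"          # e.g. test/bdev/bdevperf/test_config.sh above
    local rc=$?
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return $rc
}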
************************************ 00:18:49.853 START TEST blockdev_nvme 00:18:49.853 ************************************ 00:18:49.853 17:37:45 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:49.853 * Looking for test storage... 00:18:49.853 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:49.853 17:37:45 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:18:49.853 17:37:45 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68273 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 68273 00:18:49.854 17:37:45 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:49.854 17:37:45 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 68273 ']' 00:18:49.854 17:37:45 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.854 17:37:45 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.854 17:37:45 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
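The start_spdk_tgt / waitforlisten sequence traced just above amounts to launching build/bin/spdk_tgt in the background, remembering its PID, and polling until /var/tmp/spdk.sock answers RPCs. A minimal re-creation, assuming scripts/rpc.py and a simple poll loop in place of the real waitforlisten helper (only the binary path, the PID handling and the socket address are taken from the trace):

# Minimal sketch, not the actual waitforlisten implementation.
SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

"$SPDK/build/bin/spdk_tgt" &            # traced as: spdk_tgt '' ''
spdk_tgt_pid=$!
trap 'kill "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT

echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done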
00:18:49.854 17:37:45 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.854 17:37:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.854 [2024-07-15 17:37:45.481235] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:49.854 [2024-07-15 17:37:45.481486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:50.421 EAL: TSC is not safe to use in SMP mode 00:18:50.421 EAL: TSC is not invariant 00:18:50.421 [2024-07-15 17:37:46.013783] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.421 [2024-07-15 17:37:46.109550] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:50.421 [2024-07-15 17:37:46.112241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 [2024-07-15 17:37:46.618817] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.988 17:37:46 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "f2d98571-42d0-11ef-96ac-773515fba644"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "f2d98571-42d0-11ef-96ac-773515fba644",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:18:50.988 17:37:46 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 68273 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 68273 ']' 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 68273 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@956 -- # tail -1 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@956 -- # ps -c -o command 68273 00:18:50.988 17:37:46 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:18:50.988 killing process with pid 68273 00:18:50.989 17:37:46 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 
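The bdev discovery step at blockdev.sh@748-749 above fetches the bdev list from the target over RPC and filters it with jq before settling on Nvme0n1 as the hello-world bdev. Replayed as a standalone command sequence against the same socket (using scripts/rpc.py in place of the rpc_cmd wrapper, which is an assumption), it looks roughly like:

# Hedged re-creation of the bdev discovery step traced above.
SPDK=/home/vagrant/spdk_repo/spdk

bdevs_json=$("$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock bdev_get_bdevs)

# Keep only bdevs not claimed by another module, then take their names.
unclaimed=$(jq -r '.[] | select(.claimed == false)' <<<"$bdevs_json")
jq -r .name <<<"$unclaimed"             # prints "Nvme0n1" on this run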
00:18:50.989 17:37:46 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68273' 00:18:50.989 17:37:46 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 68273 00:18:50.989 17:37:46 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 68273 00:18:51.247 17:37:47 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:51.248 17:37:47 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:51.248 17:37:47 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:51.248 17:37:47 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:51.248 17:37:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:51.248 ************************************ 00:18:51.248 START TEST bdev_hello_world 00:18:51.248 ************************************ 00:18:51.248 17:37:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:51.248 [2024-07-15 17:37:47.073672] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:51.248 [2024-07-15 17:37:47.073840] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:51.814 EAL: TSC is not safe to use in SMP mode 00:18:51.814 EAL: TSC is not invariant 00:18:51.814 [2024-07-15 17:37:47.594718] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.073 [2024-07-15 17:37:47.682054] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:52.073 [2024-07-15 17:37:47.684302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.073 [2024-07-15 17:37:47.742204] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:52.073 [2024-07-15 17:37:47.814712] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:52.073 [2024-07-15 17:37:47.814763] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:52.073 [2024-07-15 17:37:47.814791] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:52.073 [2024-07-15 17:37:47.815502] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:52.073 [2024-07-15 17:37:47.815827] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:52.073 [2024-07-15 17:37:47.815850] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:52.073 [2024-07-15 17:37:47.816005] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
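The bdev_hello_world run above boils down to a single hello_bdev invocation against Nvme0n1 with a JSON config that attaches the PCIe controller at 0000:00:10.0. Stripped of the harness, an equivalent standalone run would be roughly the following; the contents of test/bdev/bdev.json are assumed, modeled on the bdev_nvme_attach_controller parameters in the trace:

# Assumed-equivalent standalone hello_bdev run; the JSON below is a guess
# reconstructed from the attach_controller parameters shown earlier.
SPDK=/home/vagrant/spdk_repo/spdk

cat > /tmp/example-bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF

"$SPDK/build/examples/hello_bdev" --json /tmp/example-bdev.json -b Nvme0n1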
00:18:52.073 00:18:52.073 [2024-07-15 17:37:47.816025] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:52.332 00:18:52.332 real 0m0.929s 00:18:52.332 user 0m0.359s 00:18:52.332 sys 0m0.568s 00:18:52.332 17:37:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.332 17:37:47 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:52.332 ************************************ 00:18:52.332 END TEST bdev_hello_world 00:18:52.332 ************************************ 00:18:52.332 17:37:48 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:52.332 17:37:48 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:18:52.332 17:37:48 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:52.332 17:37:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.332 17:37:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:52.332 ************************************ 00:18:52.332 START TEST bdev_bounds 00:18:52.332 ************************************ 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68344 00:18:52.332 Process bdevio pid: 68344 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68344' 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68344 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68344 ']' 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:52.332 17:37:48 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:52.332 [2024-07-15 17:37:48.052883] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:18:52.332 [2024-07-15 17:37:48.053129] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:52.898 EAL: TSC is not safe to use in SMP mode 00:18:52.898 EAL: TSC is not invariant 00:18:52.898 [2024-07-15 17:37:48.594275] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:52.898 [2024-07-15 17:37:48.674844] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:52.898 [2024-07-15 17:37:48.674902] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:18:52.898 [2024-07-15 17:37:48.674912] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:18:52.898 [2024-07-15 17:37:48.678204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:52.898 [2024-07-15 17:37:48.678327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.898 [2024-07-15 17:37:48.678322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:53.156 [2024-07-15 17:37:48.736135] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:53.415 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:53.415 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:18:53.415 17:37:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:53.415 I/O targets: 00:18:53.415 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:53.415 00:18:53.415 00:18:53.415 CUnit - A unit testing framework for C - Version 2.1-3 00:18:53.415 http://cunit.sourceforge.net/ 00:18:53.415 00:18:53.415 00:18:53.415 Suite: bdevio tests on: Nvme0n1 00:18:53.415 Test: blockdev write read block ...passed 00:18:53.415 Test: blockdev write zeroes read block ...passed 00:18:53.415 Test: blockdev write zeroes read no split ...passed 00:18:53.415 Test: blockdev write zeroes read split ...passed 00:18:53.415 Test: blockdev write zeroes read split partial ...passed 00:18:53.415 Test: blockdev reset ...[2024-07-15 17:37:49.163981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:53.415 [2024-07-15 17:37:49.165319] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:53.415 passed 00:18:53.415 Test: blockdev write read 8 blocks ...passed 00:18:53.415 Test: blockdev write read size > 128k ...passed 00:18:53.415 Test: blockdev write read invalid size ...passed 00:18:53.415 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:53.415 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:53.415 Test: blockdev write read max offset ...passed 00:18:53.415 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:53.415 Test: blockdev writev readv 8 blocks ...passed 00:18:53.415 Test: blockdev writev readv 30 x 1block ...passed 00:18:53.415 Test: blockdev writev readv block ...passed 00:18:53.415 Test: blockdev writev readv size > 128k ...passed 00:18:53.415 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:53.415 Test: blockdev comparev and writev ...[2024-07-15 17:37:49.169529] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x277945000 len:0x1000 00:18:53.415 [2024-07-15 17:37:49.169569] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:53.415 passed 00:18:53.415 Test: blockdev nvme passthru rw ...passed 00:18:53.415 Test: blockdev nvme passthru vendor specific ...[2024-07-15 17:37:49.170151] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:53.415 passed 00:18:53.415 Test: blockdev nvme admin passthru ...[2024-07-15 17:37:49.170170] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:53.415 passed 00:18:53.415 Test: blockdev copy ...passed 00:18:53.415 00:18:53.415 Run Summary: Type Total Ran Passed Failed Inactive 00:18:53.416 suites 1 1 n/a 0 0 00:18:53.416 tests 23 23 23 0 0 00:18:53.416 asserts 152 152 152 0 n/a 00:18:53.416 00:18:53.416 Elapsed time = 0.031 seconds 00:18:53.416 0 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68344 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68344 ']' 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68344 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 68344 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:18:53.416 killing process with pid 68344 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68344' 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68344 00:18:53.416 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68344 00:18:53.675 17:37:49 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:18:53.675 00:18:53.675 real 0m1.344s 00:18:53.675 user 0m2.451s 00:18:53.675 sys 0m0.660s 00:18:53.675 
************************************ 00:18:53.675 END TEST bdev_bounds 00:18:53.675 ************************************ 00:18:53.675 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.675 17:37:49 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:53.675 17:37:49 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:53.675 17:37:49 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:53.675 17:37:49 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:53.675 17:37:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.675 17:37:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.675 ************************************ 00:18:53.675 START TEST bdev_nbd 00:18:53.675 ************************************ 00:18:53.675 17:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:53.675 17:37:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:18:53.675 17:37:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:18:53.675 17:37:49 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:18:53.675 00:18:53.675 real 0m0.004s 00:18:53.675 user 0m0.004s 00:18:53.675 sys 0m0.000s 00:18:53.675 17:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.675 ************************************ 00:18:53.675 END TEST bdev_nbd 00:18:53.675 ************************************ 00:18:53.675 17:37:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:53.675 17:37:49 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:53.675 17:37:49 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:18:53.675 17:37:49 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:18:53.675 skipping fio tests on NVMe due to multi-ns failures. 00:18:53.675 17:37:49 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:18:53.675 17:37:49 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:53.675 17:37:49 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:53.675 17:37:49 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:18:53.675 17:37:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.675 17:37:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.675 ************************************ 00:18:53.675 START TEST bdev_verify 00:18:53.675 ************************************ 00:18:53.675 17:37:49 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:53.675 [2024-07-15 17:37:49.495935] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:18:53.675 [2024-07-15 17:37:49.496181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:54.244 EAL: TSC is not safe to use in SMP mode 00:18:54.244 EAL: TSC is not invariant 00:18:54.244 [2024-07-15 17:37:50.019363] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:54.506 [2024-07-15 17:37:50.103875] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:54.506 [2024-07-15 17:37:50.103928] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:54.506 [2024-07-15 17:37:50.106714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.506 [2024-07-15 17:37:50.106704] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:54.506 [2024-07-15 17:37:50.165669] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:54.506 Running I/O for 5 seconds... 00:18:59.772 00:18:59.772 Latency(us) 00:18:59.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:59.772 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:59.772 Verification LBA range: start 0x0 length 0xa0000 00:18:59.772 Nvme0n1 : 5.00 21750.21 84.96 0.00 0.00 5877.01 588.34 10009.15 00:18:59.772 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:59.772 Verification LBA range: start 0xa0000 length 0xa0000 00:18:59.772 Nvme0n1 : 5.00 22251.50 86.92 0.00 0.00 5743.49 636.74 11558.18 00:18:59.772 =================================================================================================================== 00:18:59.772 Total : 44001.71 171.88 0.00 0.00 5809.49 588.34 11558.18 00:19:00.340 00:19:00.340 real 0m6.488s 00:19:00.340 user 0m11.642s 00:19:00.340 sys 0m0.568s 00:19:00.340 17:37:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:00.340 17:37:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:19:00.340 ************************************ 00:19:00.340 END TEST bdev_verify 00:19:00.340 ************************************ 00:19:00.340 17:37:56 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:00.340 17:37:56 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:00.340 17:37:56 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:19:00.340 17:37:56 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:00.340 17:37:56 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.340 ************************************ 00:19:00.340 START TEST bdev_verify_big_io 00:19:00.340 ************************************ 00:19:00.340 17:37:56 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:19:00.340 [2024-07-15 17:37:56.033760] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:19:00.340 [2024-07-15 17:37:56.033985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:00.908 EAL: TSC is not safe to use in SMP mode 00:19:00.908 EAL: TSC is not invariant 00:19:00.908 [2024-07-15 17:37:56.569757] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:00.908 [2024-07-15 17:37:56.652431] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:00.908 [2024-07-15 17:37:56.652505] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:00.908 [2024-07-15 17:37:56.655241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:00.908 [2024-07-15 17:37:56.655232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:00.908 [2024-07-15 17:37:56.713760] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:01.167 Running I/O for 5 seconds... 00:19:06.422 00:19:06.422 Latency(us) 00:19:06.422 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:06.422 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:19:06.422 Verification LBA range: start 0x0 length 0xa000 00:19:06.422 Nvme0n1 : 5.01 8275.71 517.23 0.00 0.00 15384.66 666.53 26810.22 00:19:06.422 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:19:06.422 Verification LBA range: start 0xa000 length 0xa000 00:19:06.422 Nvme0n1 : 5.01 8047.52 502.97 0.00 0.00 15821.47 95.88 24307.93 00:19:06.422 =================================================================================================================== 00:19:06.422 Total : 16323.23 1020.20 0.00 0.00 15600.06 95.88 26810.22 00:19:09.708 00:19:09.708 real 0m9.069s 00:19:09.708 user 0m16.771s 00:19:09.708 sys 0m0.591s 00:19:09.708 ************************************ 00:19:09.708 END TEST bdev_verify_big_io 00:19:09.708 ************************************ 00:19:09.708 17:38:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:09.708 17:38:05 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:19:09.708 17:38:05 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:09.708 17:38:05 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:09.708 17:38:05 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:09.708 17:38:05 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.708 17:38:05 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.708 ************************************ 00:19:09.708 START TEST bdev_write_zeroes 00:19:09.708 ************************************ 00:19:09.708 17:38:05 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:09.708 [2024-07-15 17:38:05.143906] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:19:09.708 [2024-07-15 17:38:05.144131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:09.966 EAL: TSC is not safe to use in SMP mode 00:19:09.966 EAL: TSC is not invariant 00:19:09.966 [2024-07-15 17:38:05.667835] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.966 [2024-07-15 17:38:05.747535] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:09.966 [2024-07-15 17:38:05.749773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.224 [2024-07-15 17:38:05.808359] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:10.224 Running I/O for 1 seconds... 00:19:11.157 00:19:11.157 Latency(us) 00:19:11.157 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.157 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:19:11.157 Nvme0n1 : 1.00 74394.55 290.60 0.00 0.00 1719.06 342.58 11319.87 00:19:11.157 =================================================================================================================== 00:19:11.157 Total : 74394.55 290.60 0.00 0.00 1719.06 342.58 11319.87 00:19:11.447 00:19:11.447 real 0m1.935s 00:19:11.447 user 0m1.358s 00:19:11.447 sys 0m0.574s 00:19:11.447 17:38:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:11.447 17:38:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:19:11.447 ************************************ 00:19:11.447 END TEST bdev_write_zeroes 00:19:11.447 ************************************ 00:19:11.447 17:38:07 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:11.447 17:38:07 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:11.447 17:38:07 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:11.447 17:38:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:11.447 17:38:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:11.447 ************************************ 00:19:11.447 START TEST bdev_json_nonenclosed 00:19:11.447 ************************************ 00:19:11.447 17:38:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:11.447 [2024-07-15 17:38:07.123617] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:19:11.447 [2024-07-15 17:38:07.123804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:12.029 EAL: TSC is not safe to use in SMP mode 00:19:12.029 EAL: TSC is not invariant 00:19:12.029 [2024-07-15 17:38:07.652765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.029 [2024-07-15 17:38:07.740951] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:19:12.029 [2024-07-15 17:38:07.743197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.029 [2024-07-15 17:38:07.743238] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:19:12.029 [2024-07-15 17:38:07.743249] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:12.029 [2024-07-15 17:38:07.743257] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:12.289 00:19:12.289 real 0m0.744s 00:19:12.289 user 0m0.190s 00:19:12.289 sys 0m0.552s 00:19:12.289 17:38:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:19:12.289 17:38:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.289 17:38:07 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:19:12.289 ************************************ 00:19:12.289 END TEST bdev_json_nonenclosed 00:19:12.289 ************************************ 00:19:12.289 17:38:07 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:19:12.289 17:38:07 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:19:12.289 17:38:07 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:12.289 17:38:07 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:19:12.289 17:38:07 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.289 17:38:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:12.289 ************************************ 00:19:12.289 START TEST bdev_json_nonarray 00:19:12.289 ************************************ 00:19:12.289 17:38:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:19:12.289 [2024-07-15 17:38:07.914415] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:19:12.289 [2024-07-15 17:38:07.914614] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:12.856 EAL: TSC is not safe to use in SMP mode 00:19:12.856 EAL: TSC is not invariant 00:19:12.856 [2024-07-15 17:38:08.459612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.856 [2024-07-15 17:38:08.547219] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:12.856 [2024-07-15 17:38:08.549520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.856 [2024-07-15 17:38:08.549578] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:19:12.856 [2024-07-15 17:38:08.549604] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:19:12.856 [2024-07-15 17:38:08.549611] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:19:12.856 00:19:12.856 real 0m0.759s 00:19:12.856 user 0m0.171s 00:19:12.856 sys 0m0.581s 00:19:12.856 17:38:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:19:12.856 17:38:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.856 17:38:08 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:19:12.856 ************************************ 00:19:12.856 END TEST bdev_json_nonarray 00:19:12.856 ************************************ 00:19:13.115 17:38:08 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:19:13.115 17:38:08 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:19:13.115 00:19:13.115 real 0m23.409s 00:19:13.115 user 0m34.741s 00:19:13.115 sys 0m5.111s 00:19:13.115 17:38:08 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:13.115 17:38:08 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:19:13.115 ************************************ 00:19:13.115 END TEST blockdev_nvme 00:19:13.115 ************************************ 00:19:13.115 17:38:08 -- common/autotest_common.sh@1142 -- # return 0 00:19:13.115 17:38:08 -- spdk/autotest.sh@213 -- # uname -s 00:19:13.115 17:38:08 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:19:13.115 17:38:08 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:13.115 17:38:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:13.115 17:38:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.115 17:38:08 -- common/autotest_common.sh@10 -- # set +x 00:19:13.115 ************************************ 00:19:13.115 START TEST nvme 00:19:13.115 ************************************ 00:19:13.115 17:38:08 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:19:13.115 * Looking for test storage... 
00:19:13.115 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:13.115 17:38:08 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:13.375 hw.nic_uio.bdfs="0:16:0" 00:19:13.375 17:38:09 nvme -- nvme/nvme.sh@79 -- # uname 00:19:13.635 17:38:09 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:19:13.635 17:38:09 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:13.635 17:38:09 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:19:13.635 17:38:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:13.635 17:38:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:13.635 ************************************ 00:19:13.635 START TEST nvme_reset 00:19:13.635 ************************************ 00:19:13.635 17:38:09 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:19:14.203 EAL: TSC is not safe to use in SMP mode 00:19:14.203 EAL: TSC is not invariant 00:19:14.203 [2024-07-15 17:38:09.813639] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:14.203 Initializing NVMe Controllers 00:19:14.203 Skipping QEMU NVMe SSD at 0000:00:10.0 00:19:14.203 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:19:14.203 00:19:14.203 real 0m0.639s 00:19:14.203 user 0m0.000s 00:19:14.203 sys 0m0.647s 00:19:14.203 17:38:09 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:14.203 ************************************ 00:19:14.203 END TEST nvme_reset 00:19:14.203 ************************************ 00:19:14.203 17:38:09 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:19:14.203 17:38:09 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:14.203 17:38:09 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:19:14.203 17:38:09 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:14.203 17:38:09 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:14.203 17:38:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:14.203 ************************************ 00:19:14.203 START TEST nvme_identify 00:19:14.203 ************************************ 00:19:14.203 17:38:09 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:19:14.203 17:38:09 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:19:14.203 17:38:09 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:19:14.203 17:38:09 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:19:14.203 17:38:09 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:19:14.203 17:38:09 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:14.203 17:38:09 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:19:14.203 17:38:09 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:14.203 17:38:09 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:14.203 17:38:09 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:14.203 17:38:09 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:14.203 17:38:09 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 
0000:00:10.0 00:19:14.203 17:38:09 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:19:14.772 EAL: TSC is not safe to use in SMP mode 00:19:14.772 EAL: TSC is not invariant 00:19:14.772 [2024-07-15 17:38:10.549348] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:14.772 ===================================================== 00:19:14.772 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:14.772 ===================================================== 00:19:14.772 Controller Capabilities/Features 00:19:14.772 ================================ 00:19:14.772 Vendor ID: 1b36 00:19:14.772 Subsystem Vendor ID: 1af4 00:19:14.772 Serial Number: 12340 00:19:14.772 Model Number: QEMU NVMe Ctrl 00:19:14.772 Firmware Version: 8.0.0 00:19:14.772 Recommended Arb Burst: 6 00:19:14.772 IEEE OUI Identifier: 00 54 52 00:19:14.772 Multi-path I/O 00:19:14.772 May have multiple subsystem ports: No 00:19:14.772 May have multiple controllers: No 00:19:14.772 Associated with SR-IOV VF: No 00:19:14.772 Max Data Transfer Size: 524288 00:19:14.772 Max Number of Namespaces: 256 00:19:14.772 Max Number of I/O Queues: 64 00:19:14.772 NVMe Specification Version (VS): 1.4 00:19:14.772 NVMe Specification Version (Identify): 1.4 00:19:14.772 Maximum Queue Entries: 2048 00:19:14.772 Contiguous Queues Required: Yes 00:19:14.772 Arbitration Mechanisms Supported 00:19:14.772 Weighted Round Robin: Not Supported 00:19:14.772 Vendor Specific: Not Supported 00:19:14.772 Reset Timeout: 7500 ms 00:19:14.772 Doorbell Stride: 4 bytes 00:19:14.772 NVM Subsystem Reset: Not Supported 00:19:14.772 Command Sets Supported 00:19:14.772 NVM Command Set: Supported 00:19:14.772 Boot Partition: Not Supported 00:19:14.772 Memory Page Size Minimum: 4096 bytes 00:19:14.772 Memory Page Size Maximum: 65536 bytes 00:19:14.772 Persistent Memory Region: Not Supported 00:19:14.772 Optional Asynchronous Events Supported 00:19:14.772 Namespace Attribute Notices: Supported 00:19:14.772 Firmware Activation Notices: Not Supported 00:19:14.772 ANA Change Notices: Not Supported 00:19:14.772 PLE Aggregate Log Change Notices: Not Supported 00:19:14.772 LBA Status Info Alert Notices: Not Supported 00:19:14.772 EGE Aggregate Log Change Notices: Not Supported 00:19:14.772 Normal NVM Subsystem Shutdown event: Not Supported 00:19:14.772 Zone Descriptor Change Notices: Not Supported 00:19:14.772 Discovery Log Change Notices: Not Supported 00:19:14.772 Controller Attributes 00:19:14.772 128-bit Host Identifier: Not Supported 00:19:14.772 Non-Operational Permissive Mode: Not Supported 00:19:14.772 NVM Sets: Not Supported 00:19:14.772 Read Recovery Levels: Not Supported 00:19:14.772 Endurance Groups: Not Supported 00:19:14.772 Predictable Latency Mode: Not Supported 00:19:14.772 Traffic Based Keep ALive: Not Supported 00:19:14.772 Namespace Granularity: Not Supported 00:19:14.772 SQ Associations: Not Supported 00:19:14.772 UUID List: Not Supported 00:19:14.773 Multi-Domain Subsystem: Not Supported 00:19:14.773 Fixed Capacity Management: Not Supported 00:19:14.773 Variable Capacity Management: Not Supported 00:19:14.773 Delete Endurance Group: Not Supported 00:19:14.773 Delete NVM Set: Not Supported 00:19:14.773 Extended LBA Formats Supported: Supported 00:19:14.773 Flexible Data Placement Supported: Not Supported 00:19:14.773 00:19:14.773 Controller Memory Buffer Support 00:19:14.773 ================================ 00:19:14.773 Supported: No 00:19:14.773 00:19:14.773 
Persistent Memory Region Support 00:19:14.773 ================================ 00:19:14.773 Supported: No 00:19:14.773 00:19:14.773 Admin Command Set Attributes 00:19:14.773 ============================ 00:19:14.773 Security Send/Receive: Not Supported 00:19:14.773 Format NVM: Supported 00:19:14.773 Firmware Activate/Download: Not Supported 00:19:14.773 Namespace Management: Supported 00:19:14.773 Device Self-Test: Not Supported 00:19:14.773 Directives: Supported 00:19:14.773 NVMe-MI: Not Supported 00:19:14.773 Virtualization Management: Not Supported 00:19:14.773 Doorbell Buffer Config: Supported 00:19:14.773 Get LBA Status Capability: Not Supported 00:19:14.773 Command & Feature Lockdown Capability: Not Supported 00:19:14.773 Abort Command Limit: 4 00:19:14.773 Async Event Request Limit: 4 00:19:14.773 Number of Firmware Slots: N/A 00:19:14.773 Firmware Slot 1 Read-Only: N/A 00:19:14.773 Firmware Activation Without Reset: N/A 00:19:14.773 Multiple Update Detection Support: N/A 00:19:14.773 Firmware Update Granularity: No Information Provided 00:19:14.773 Per-Namespace SMART Log: Yes 00:19:14.773 Asymmetric Namespace Access Log Page: Not Supported 00:19:14.773 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:14.773 Command Effects Log Page: Supported 00:19:14.773 Get Log Page Extended Data: Supported 00:19:14.773 Telemetry Log Pages: Not Supported 00:19:14.773 Persistent Event Log Pages: Not Supported 00:19:14.773 Supported Log Pages Log Page: May Support 00:19:14.773 Commands Supported & Effects Log Page: Not Supported 00:19:14.773 Feature Identifiers & Effects Log Page:May Support 00:19:14.773 NVMe-MI Commands & Effects Log Page: May Support 00:19:14.773 Data Area 4 for Telemetry Log: Not Supported 00:19:14.773 Error Log Page Entries Supported: 1 00:19:14.773 Keep Alive: Not Supported 00:19:14.773 00:19:14.773 NVM Command Set Attributes 00:19:14.773 ========================== 00:19:14.773 Submission Queue Entry Size 00:19:14.773 Max: 64 00:19:14.773 Min: 64 00:19:14.773 Completion Queue Entry Size 00:19:14.773 Max: 16 00:19:14.773 Min: 16 00:19:14.773 Number of Namespaces: 256 00:19:14.773 Compare Command: Supported 00:19:14.773 Write Uncorrectable Command: Not Supported 00:19:14.773 Dataset Management Command: Supported 00:19:14.773 Write Zeroes Command: Supported 00:19:14.773 Set Features Save Field: Supported 00:19:14.773 Reservations: Not Supported 00:19:14.773 Timestamp: Supported 00:19:14.773 Copy: Supported 00:19:14.773 Volatile Write Cache: Present 00:19:14.773 Atomic Write Unit (Normal): 1 00:19:14.773 Atomic Write Unit (PFail): 1 00:19:14.773 Atomic Compare & Write Unit: 1 00:19:14.773 Fused Compare & Write: Not Supported 00:19:14.773 Scatter-Gather List 00:19:14.773 SGL Command Set: Supported 00:19:14.773 SGL Keyed: Not Supported 00:19:14.773 SGL Bit Bucket Descriptor: Not Supported 00:19:14.773 SGL Metadata Pointer: Not Supported 00:19:14.773 Oversized SGL: Not Supported 00:19:14.773 SGL Metadata Address: Not Supported 00:19:14.773 SGL Offset: Not Supported 00:19:14.773 Transport SGL Data Block: Not Supported 00:19:14.773 Replay Protected Memory Block: Not Supported 00:19:14.773 00:19:14.773 Firmware Slot Information 00:19:14.773 ========================= 00:19:14.773 Active slot: 1 00:19:14.773 Slot 1 Firmware Revision: 1.0 00:19:14.773 00:19:14.773 00:19:14.773 Commands Supported and Effects 00:19:14.773 ============================== 00:19:14.773 Admin Commands 00:19:14.773 -------------- 00:19:14.773 Delete I/O Submission Queue (00h): Supported 00:19:14.773 Create I/O 
Submission Queue (01h): Supported 00:19:14.773 Get Log Page (02h): Supported 00:19:14.773 Delete I/O Completion Queue (04h): Supported 00:19:14.773 Create I/O Completion Queue (05h): Supported 00:19:14.773 Identify (06h): Supported 00:19:14.773 Abort (08h): Supported 00:19:14.773 Set Features (09h): Supported 00:19:14.773 Get Features (0Ah): Supported 00:19:14.773 Asynchronous Event Request (0Ch): Supported 00:19:14.773 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:14.773 Directive Send (19h): Supported 00:19:14.773 Directive Receive (1Ah): Supported 00:19:14.773 Virtualization Management (1Ch): Supported 00:19:14.773 Doorbell Buffer Config (7Ch): Supported 00:19:14.773 Format NVM (80h): Supported LBA-Change 00:19:14.773 I/O Commands 00:19:14.773 ------------ 00:19:14.773 Flush (00h): Supported LBA-Change 00:19:14.773 Write (01h): Supported LBA-Change 00:19:14.773 Read (02h): Supported 00:19:14.773 Compare (05h): Supported 00:19:14.773 Write Zeroes (08h): Supported LBA-Change 00:19:14.773 Dataset Management (09h): Supported LBA-Change 00:19:14.773 Unknown (0Ch): Supported 00:19:14.773 Unknown (12h): Supported 00:19:14.773 Copy (19h): Supported LBA-Change 00:19:14.773 Unknown (1Dh): Supported LBA-Change 00:19:14.773 00:19:14.773 Error Log 00:19:14.773 ========= 00:19:14.773 00:19:14.773 Arbitration 00:19:14.773 =========== 00:19:14.773 Arbitration Burst: no limit 00:19:14.773 00:19:14.773 Power Management 00:19:14.773 ================ 00:19:14.773 Number of Power States: 1 00:19:14.773 Current Power State: Power State #0 00:19:14.773 Power State #0: 00:19:14.773 Max Power: 25.00 W 00:19:14.773 Non-Operational State: Operational 00:19:14.773 Entry Latency: 16 microseconds 00:19:14.773 Exit Latency: 4 microseconds 00:19:14.773 Relative Read Throughput: 0 00:19:14.773 Relative Read Latency: 0 00:19:14.773 Relative Write Throughput: 0 00:19:14.773 Relative Write Latency: 0 00:19:14.773 Idle Power: Not Reported 00:19:14.773 Active Power: Not Reported 00:19:14.773 Non-Operational Permissive Mode: Not Supported 00:19:14.773 00:19:14.773 Health Information 00:19:14.773 ================== 00:19:14.773 Critical Warnings: 00:19:14.773 Available Spare Space: OK 00:19:14.773 Temperature: OK 00:19:14.773 Device Reliability: OK 00:19:14.773 Read Only: No 00:19:14.773 Volatile Memory Backup: OK 00:19:14.773 Current Temperature: 323 Kelvin (50 Celsius) 00:19:14.773 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:14.773 Available Spare: 0% 00:19:14.773 Available Spare Threshold: 0% 00:19:14.773 Life Percentage Used: 0% 00:19:14.773 Data Units Read: 12246 00:19:14.773 Data Units Written: 12231 00:19:14.773 Host Read Commands: 302166 00:19:14.773 Host Write Commands: 302018 00:19:14.773 Controller Busy Time: 0 minutes 00:19:14.773 Power Cycles: 0 00:19:14.773 Power On Hours: 0 hours 00:19:14.773 Unsafe Shutdowns: 0 00:19:14.773 Unrecoverable Media Errors: 0 00:19:14.773 Lifetime Error Log Entries: 0 00:19:14.773 Warning Temperature Time: 0 minutes 00:19:14.773 Critical Temperature Time: 0 minutes 00:19:14.773 00:19:14.773 Number of Queues 00:19:14.773 ================ 00:19:14.773 Number of I/O Submission Queues: 64 00:19:14.773 Number of I/O Completion Queues: 64 00:19:14.773 00:19:14.773 ZNS Specific Controller Data 00:19:14.773 ============================ 00:19:14.773 Zone Append Size Limit: 0 00:19:14.773 00:19:14.773 00:19:14.773 Active Namespaces 00:19:14.773 ================= 00:19:14.773 Namespace ID:1 00:19:14.773 Error Recovery Timeout: Unlimited 00:19:14.773 Command Set 
Identifier: NVM (00h) 00:19:14.773 Deallocate: Supported 00:19:14.773 Deallocated/Unwritten Error: Supported 00:19:14.773 Deallocated Read Value: All 0x00 00:19:14.773 Deallocate in Write Zeroes: Not Supported 00:19:14.773 Deallocated Guard Field: 0xFFFF 00:19:14.773 Flush: Supported 00:19:14.773 Reservation: Not Supported 00:19:14.773 Namespace Sharing Capabilities: Private 00:19:14.773 Size (in LBAs): 1310720 (5GiB) 00:19:14.773 Capacity (in LBAs): 1310720 (5GiB) 00:19:14.773 Utilization (in LBAs): 1310720 (5GiB) 00:19:14.773 Thin Provisioning: Not Supported 00:19:14.773 Per-NS Atomic Units: No 00:19:14.773 Maximum Single Source Range Length: 128 00:19:14.773 Maximum Copy Length: 128 00:19:14.773 Maximum Source Range Count: 128 00:19:14.773 NGUID/EUI64 Never Reused: No 00:19:14.773 Namespace Write Protected: No 00:19:14.773 Number of LBA Formats: 8 00:19:14.773 Current LBA Format: LBA Format #04 00:19:14.773 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:14.773 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:14.773 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:14.773 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:14.773 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:14.773 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:14.773 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:14.773 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:14.773 00:19:14.773 NVM Specific Namespace Data 00:19:14.773 =========================== 00:19:14.774 Logical Block Storage Tag Mask: 0 00:19:14.774 Protection Information Capabilities: 00:19:14.774 16b Guard Protection Information Storage Tag Support: No 00:19:14.774 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:14.774 Storage Tag Check Read Support: No 00:19:14.774 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:14.774 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:14.774 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:14.774 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:14.774 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:14.774 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:14.774 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:14.774 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:14.774 17:38:10 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:19:14.774 17:38:10 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:19:15.377 EAL: TSC is not safe to use in SMP mode 00:19:15.377 EAL: TSC is not invariant 00:19:15.377 [2024-07-15 17:38:11.201279] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:15.637 ===================================================== 00:19:15.637 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:15.637 ===================================================== 00:19:15.637 Controller Capabilities/Features 00:19:15.637 ================================ 00:19:15.637 Vendor ID: 1b36 00:19:15.637 Subsystem Vendor ID: 1af4 00:19:15.637 Serial Number: 12340 00:19:15.637 Model Number: QEMU NVMe Ctrl 
00:19:15.637 Firmware Version: 8.0.0 00:19:15.637 Recommended Arb Burst: 6 00:19:15.637 IEEE OUI Identifier: 00 54 52 00:19:15.637 Multi-path I/O 00:19:15.637 May have multiple subsystem ports: No 00:19:15.637 May have multiple controllers: No 00:19:15.637 Associated with SR-IOV VF: No 00:19:15.637 Max Data Transfer Size: 524288 00:19:15.637 Max Number of Namespaces: 256 00:19:15.637 Max Number of I/O Queues: 64 00:19:15.637 NVMe Specification Version (VS): 1.4 00:19:15.637 NVMe Specification Version (Identify): 1.4 00:19:15.637 Maximum Queue Entries: 2048 00:19:15.637 Contiguous Queues Required: Yes 00:19:15.637 Arbitration Mechanisms Supported 00:19:15.637 Weighted Round Robin: Not Supported 00:19:15.637 Vendor Specific: Not Supported 00:19:15.637 Reset Timeout: 7500 ms 00:19:15.637 Doorbell Stride: 4 bytes 00:19:15.637 NVM Subsystem Reset: Not Supported 00:19:15.637 Command Sets Supported 00:19:15.637 NVM Command Set: Supported 00:19:15.637 Boot Partition: Not Supported 00:19:15.637 Memory Page Size Minimum: 4096 bytes 00:19:15.637 Memory Page Size Maximum: 65536 bytes 00:19:15.637 Persistent Memory Region: Not Supported 00:19:15.637 Optional Asynchronous Events Supported 00:19:15.637 Namespace Attribute Notices: Supported 00:19:15.637 Firmware Activation Notices: Not Supported 00:19:15.637 ANA Change Notices: Not Supported 00:19:15.637 PLE Aggregate Log Change Notices: Not Supported 00:19:15.637 LBA Status Info Alert Notices: Not Supported 00:19:15.637 EGE Aggregate Log Change Notices: Not Supported 00:19:15.637 Normal NVM Subsystem Shutdown event: Not Supported 00:19:15.637 Zone Descriptor Change Notices: Not Supported 00:19:15.637 Discovery Log Change Notices: Not Supported 00:19:15.637 Controller Attributes 00:19:15.637 128-bit Host Identifier: Not Supported 00:19:15.637 Non-Operational Permissive Mode: Not Supported 00:19:15.637 NVM Sets: Not Supported 00:19:15.637 Read Recovery Levels: Not Supported 00:19:15.637 Endurance Groups: Not Supported 00:19:15.637 Predictable Latency Mode: Not Supported 00:19:15.637 Traffic Based Keep ALive: Not Supported 00:19:15.637 Namespace Granularity: Not Supported 00:19:15.637 SQ Associations: Not Supported 00:19:15.637 UUID List: Not Supported 00:19:15.637 Multi-Domain Subsystem: Not Supported 00:19:15.637 Fixed Capacity Management: Not Supported 00:19:15.637 Variable Capacity Management: Not Supported 00:19:15.637 Delete Endurance Group: Not Supported 00:19:15.637 Delete NVM Set: Not Supported 00:19:15.637 Extended LBA Formats Supported: Supported 00:19:15.637 Flexible Data Placement Supported: Not Supported 00:19:15.637 00:19:15.637 Controller Memory Buffer Support 00:19:15.637 ================================ 00:19:15.637 Supported: No 00:19:15.637 00:19:15.637 Persistent Memory Region Support 00:19:15.637 ================================ 00:19:15.637 Supported: No 00:19:15.637 00:19:15.637 Admin Command Set Attributes 00:19:15.637 ============================ 00:19:15.637 Security Send/Receive: Not Supported 00:19:15.637 Format NVM: Supported 00:19:15.637 Firmware Activate/Download: Not Supported 00:19:15.637 Namespace Management: Supported 00:19:15.637 Device Self-Test: Not Supported 00:19:15.637 Directives: Supported 00:19:15.637 NVMe-MI: Not Supported 00:19:15.637 Virtualization Management: Not Supported 00:19:15.637 Doorbell Buffer Config: Supported 00:19:15.637 Get LBA Status Capability: Not Supported 00:19:15.637 Command & Feature Lockdown Capability: Not Supported 00:19:15.637 Abort Command Limit: 4 00:19:15.637 Async Event Request 
Limit: 4 00:19:15.637 Number of Firmware Slots: N/A 00:19:15.637 Firmware Slot 1 Read-Only: N/A 00:19:15.637 Firmware Activation Without Reset: N/A 00:19:15.637 Multiple Update Detection Support: N/A 00:19:15.637 Firmware Update Granularity: No Information Provided 00:19:15.637 Per-Namespace SMART Log: Yes 00:19:15.637 Asymmetric Namespace Access Log Page: Not Supported 00:19:15.637 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:19:15.637 Command Effects Log Page: Supported 00:19:15.637 Get Log Page Extended Data: Supported 00:19:15.637 Telemetry Log Pages: Not Supported 00:19:15.637 Persistent Event Log Pages: Not Supported 00:19:15.637 Supported Log Pages Log Page: May Support 00:19:15.637 Commands Supported & Effects Log Page: Not Supported 00:19:15.637 Feature Identifiers & Effects Log Page:May Support 00:19:15.637 NVMe-MI Commands & Effects Log Page: May Support 00:19:15.637 Data Area 4 for Telemetry Log: Not Supported 00:19:15.637 Error Log Page Entries Supported: 1 00:19:15.637 Keep Alive: Not Supported 00:19:15.637 00:19:15.637 NVM Command Set Attributes 00:19:15.637 ========================== 00:19:15.637 Submission Queue Entry Size 00:19:15.637 Max: 64 00:19:15.637 Min: 64 00:19:15.637 Completion Queue Entry Size 00:19:15.637 Max: 16 00:19:15.637 Min: 16 00:19:15.637 Number of Namespaces: 256 00:19:15.637 Compare Command: Supported 00:19:15.637 Write Uncorrectable Command: Not Supported 00:19:15.637 Dataset Management Command: Supported 00:19:15.637 Write Zeroes Command: Supported 00:19:15.637 Set Features Save Field: Supported 00:19:15.637 Reservations: Not Supported 00:19:15.637 Timestamp: Supported 00:19:15.637 Copy: Supported 00:19:15.637 Volatile Write Cache: Present 00:19:15.637 Atomic Write Unit (Normal): 1 00:19:15.637 Atomic Write Unit (PFail): 1 00:19:15.637 Atomic Compare & Write Unit: 1 00:19:15.637 Fused Compare & Write: Not Supported 00:19:15.637 Scatter-Gather List 00:19:15.637 SGL Command Set: Supported 00:19:15.637 SGL Keyed: Not Supported 00:19:15.637 SGL Bit Bucket Descriptor: Not Supported 00:19:15.637 SGL Metadata Pointer: Not Supported 00:19:15.637 Oversized SGL: Not Supported 00:19:15.637 SGL Metadata Address: Not Supported 00:19:15.637 SGL Offset: Not Supported 00:19:15.637 Transport SGL Data Block: Not Supported 00:19:15.637 Replay Protected Memory Block: Not Supported 00:19:15.637 00:19:15.637 Firmware Slot Information 00:19:15.638 ========================= 00:19:15.638 Active slot: 1 00:19:15.638 Slot 1 Firmware Revision: 1.0 00:19:15.638 00:19:15.638 00:19:15.638 Commands Supported and Effects 00:19:15.638 ============================== 00:19:15.638 Admin Commands 00:19:15.638 -------------- 00:19:15.638 Delete I/O Submission Queue (00h): Supported 00:19:15.638 Create I/O Submission Queue (01h): Supported 00:19:15.638 Get Log Page (02h): Supported 00:19:15.638 Delete I/O Completion Queue (04h): Supported 00:19:15.638 Create I/O Completion Queue (05h): Supported 00:19:15.638 Identify (06h): Supported 00:19:15.638 Abort (08h): Supported 00:19:15.638 Set Features (09h): Supported 00:19:15.638 Get Features (0Ah): Supported 00:19:15.638 Asynchronous Event Request (0Ch): Supported 00:19:15.638 Namespace Attachment (15h): Supported NS-Inventory-Change 00:19:15.638 Directive Send (19h): Supported 00:19:15.638 Directive Receive (1Ah): Supported 00:19:15.638 Virtualization Management (1Ch): Supported 00:19:15.638 Doorbell Buffer Config (7Ch): Supported 00:19:15.638 Format NVM (80h): Supported LBA-Change 00:19:15.638 I/O Commands 00:19:15.638 ------------ 
00:19:15.638 Flush (00h): Supported LBA-Change 00:19:15.638 Write (01h): Supported LBA-Change 00:19:15.638 Read (02h): Supported 00:19:15.638 Compare (05h): Supported 00:19:15.638 Write Zeroes (08h): Supported LBA-Change 00:19:15.638 Dataset Management (09h): Supported LBA-Change 00:19:15.638 Unknown (0Ch): Supported 00:19:15.638 Unknown (12h): Supported 00:19:15.638 Copy (19h): Supported LBA-Change 00:19:15.638 Unknown (1Dh): Supported LBA-Change 00:19:15.638 00:19:15.638 Error Log 00:19:15.638 ========= 00:19:15.638 00:19:15.638 Arbitration 00:19:15.638 =========== 00:19:15.638 Arbitration Burst: no limit 00:19:15.638 00:19:15.638 Power Management 00:19:15.638 ================ 00:19:15.638 Number of Power States: 1 00:19:15.638 Current Power State: Power State #0 00:19:15.638 Power State #0: 00:19:15.638 Max Power: 25.00 W 00:19:15.638 Non-Operational State: Operational 00:19:15.638 Entry Latency: 16 microseconds 00:19:15.638 Exit Latency: 4 microseconds 00:19:15.638 Relative Read Throughput: 0 00:19:15.638 Relative Read Latency: 0 00:19:15.638 Relative Write Throughput: 0 00:19:15.638 Relative Write Latency: 0 00:19:15.638 Idle Power: Not Reported 00:19:15.638 Active Power: Not Reported 00:19:15.638 Non-Operational Permissive Mode: Not Supported 00:19:15.638 00:19:15.638 Health Information 00:19:15.638 ================== 00:19:15.638 Critical Warnings: 00:19:15.638 Available Spare Space: OK 00:19:15.638 Temperature: OK 00:19:15.638 Device Reliability: OK 00:19:15.638 Read Only: No 00:19:15.638 Volatile Memory Backup: OK 00:19:15.638 Current Temperature: 323 Kelvin (50 Celsius) 00:19:15.638 Temperature Threshold: 343 Kelvin (70 Celsius) 00:19:15.638 Available Spare: 0% 00:19:15.638 Available Spare Threshold: 0% 00:19:15.638 Life Percentage Used: 0% 00:19:15.638 Data Units Read: 12246 00:19:15.638 Data Units Written: 12231 00:19:15.638 Host Read Commands: 302166 00:19:15.638 Host Write Commands: 302018 00:19:15.638 Controller Busy Time: 0 minutes 00:19:15.638 Power Cycles: 0 00:19:15.638 Power On Hours: 0 hours 00:19:15.638 Unsafe Shutdowns: 0 00:19:15.638 Unrecoverable Media Errors: 0 00:19:15.638 Lifetime Error Log Entries: 0 00:19:15.638 Warning Temperature Time: 0 minutes 00:19:15.638 Critical Temperature Time: 0 minutes 00:19:15.638 00:19:15.638 Number of Queues 00:19:15.638 ================ 00:19:15.638 Number of I/O Submission Queues: 64 00:19:15.638 Number of I/O Completion Queues: 64 00:19:15.638 00:19:15.638 ZNS Specific Controller Data 00:19:15.638 ============================ 00:19:15.638 Zone Append Size Limit: 0 00:19:15.638 00:19:15.638 00:19:15.638 Active Namespaces 00:19:15.638 ================= 00:19:15.638 Namespace ID:1 00:19:15.638 Error Recovery Timeout: Unlimited 00:19:15.638 Command Set Identifier: NVM (00h) 00:19:15.638 Deallocate: Supported 00:19:15.638 Deallocated/Unwritten Error: Supported 00:19:15.638 Deallocated Read Value: All 0x00 00:19:15.638 Deallocate in Write Zeroes: Not Supported 00:19:15.638 Deallocated Guard Field: 0xFFFF 00:19:15.638 Flush: Supported 00:19:15.638 Reservation: Not Supported 00:19:15.638 Namespace Sharing Capabilities: Private 00:19:15.638 Size (in LBAs): 1310720 (5GiB) 00:19:15.638 Capacity (in LBAs): 1310720 (5GiB) 00:19:15.638 Utilization (in LBAs): 1310720 (5GiB) 00:19:15.638 Thin Provisioning: Not Supported 00:19:15.638 Per-NS Atomic Units: No 00:19:15.638 Maximum Single Source Range Length: 128 00:19:15.638 Maximum Copy Length: 128 00:19:15.638 Maximum Source Range Count: 128 00:19:15.638 NGUID/EUI64 Never Reused: No 
00:19:15.638 Namespace Write Protected: No 00:19:15.638 Number of LBA Formats: 8 00:19:15.638 Current LBA Format: LBA Format #04 00:19:15.638 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:15.638 LBA Format #01: Data Size: 512 Metadata Size: 8 00:19:15.638 LBA Format #02: Data Size: 512 Metadata Size: 16 00:19:15.638 LBA Format #03: Data Size: 512 Metadata Size: 64 00:19:15.638 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:19:15.638 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:19:15.638 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:19:15.638 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:19:15.638 00:19:15.638 NVM Specific Namespace Data 00:19:15.638 =========================== 00:19:15.638 Logical Block Storage Tag Mask: 0 00:19:15.638 Protection Information Capabilities: 00:19:15.638 16b Guard Protection Information Storage Tag Support: No 00:19:15.638 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:19:15.638 Storage Tag Check Read Support: No 00:19:15.638 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:15.638 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:15.638 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:15.638 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:15.638 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:15.638 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:15.638 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:15.638 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:19:15.638 00:19:15.638 real 0m1.348s 00:19:15.638 user 0m0.046s 00:19:15.638 sys 0m1.308s 00:19:15.638 17:38:11 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:15.638 17:38:11 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:19:15.638 ************************************ 00:19:15.638 END TEST nvme_identify 00:19:15.638 ************************************ 00:19:15.638 17:38:11 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:15.638 17:38:11 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:19:15.638 17:38:11 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:15.638 17:38:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:15.638 17:38:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:15.638 ************************************ 00:19:15.638 START TEST nvme_perf 00:19:15.638 ************************************ 00:19:15.638 17:38:11 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:19:15.638 17:38:11 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:19:16.207 EAL: TSC is not safe to use in SMP mode 00:19:16.207 EAL: TSC is not invariant 00:19:16.207 [2024-07-15 17:38:11.871521] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:17.144 Initializing NVMe Controllers 00:19:17.144 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:17.144 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:17.144 Initialization complete. Launching workers. 
00:19:17.144 ======================================================== 00:19:17.144 Latency(us) 00:19:17.144 Device Information : IOPS MiB/s Average min max 00:19:17.144 PCIE (0000:00:10.0) NSID 1 from core 0: 84196.00 986.67 1521.97 177.66 4390.60 00:19:17.144 ======================================================== 00:19:17.144 Total : 84196.00 986.67 1521.97 177.66 4390.60 00:19:17.144 00:19:17.144 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:17.144 ================================================================================= 00:19:17.144 1.00000% : 1206.460us 00:19:17.144 10.00000% : 1303.275us 00:19:17.144 25.00000% : 1385.195us 00:19:17.144 50.00000% : 1474.562us 00:19:17.144 75.00000% : 1571.377us 00:19:17.144 90.00000% : 1876.715us 00:19:17.144 95.00000% : 2040.556us 00:19:17.144 98.00000% : 2159.712us 00:19:17.144 99.00000% : 2263.974us 00:19:17.144 99.50000% : 2457.604us 00:19:17.144 99.90000% : 2978.913us 00:19:17.144 99.99000% : 3083.175us 00:19:17.144 99.99900% : 4408.792us 00:19:17.144 99.99990% : 4408.792us 00:19:17.144 99.99999% : 4408.792us 00:19:17.144 00:19:17.144 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:17.145 ============================================================================== 00:19:17.145 Range in us Cumulative IO count 00:19:17.145 176.873 - 177.804: 0.0012% ( 1) 00:19:17.145 177.804 - 178.735: 0.0024% ( 1) 00:19:17.145 178.735 - 179.666: 0.0036% ( 1) 00:19:17.145 180.597 - 181.528: 0.0048% ( 1) 00:19:17.145 182.458 - 183.389: 0.0059% ( 1) 00:19:17.145 226.211 - 227.142: 0.0071% ( 1) 00:19:17.145 234.589 - 235.520: 0.0095% ( 2) 00:19:17.145 236.451 - 237.382: 0.0107% ( 1) 00:19:17.145 240.175 - 242.037: 0.0143% ( 3) 00:19:17.145 242.037 - 243.899: 0.0166% ( 2) 00:19:17.145 243.899 - 245.760: 0.0178% ( 1) 00:19:17.145 245.760 - 247.622: 0.0226% ( 4) 00:19:17.145 249.484 - 251.346: 0.0238% ( 1) 00:19:17.145 253.208 - 255.069: 0.0249% ( 1) 00:19:17.145 277.411 - 279.273: 0.0261% ( 1) 00:19:17.145 279.273 - 281.135: 0.0285% ( 2) 00:19:17.145 281.135 - 282.997: 0.0297% ( 1) 00:19:17.145 282.997 - 284.859: 0.0309% ( 1) 00:19:17.145 284.859 - 286.720: 0.0333% ( 2) 00:19:17.145 286.720 - 288.582: 0.0344% ( 1) 00:19:17.145 288.582 - 290.444: 0.0368% ( 2) 00:19:17.145 292.306 - 294.168: 0.0416% ( 4) 00:19:17.145 296.030 - 297.891: 0.0439% ( 2) 00:19:17.145 297.891 - 299.753: 0.0451% ( 1) 00:19:17.145 625.572 - 629.295: 0.0463% ( 1) 00:19:17.145 629.295 - 633.019: 0.0487% ( 2) 00:19:17.145 633.019 - 636.743: 0.0523% ( 3) 00:19:17.145 636.743 - 640.466: 0.0546% ( 2) 00:19:17.145 789.412 - 793.136: 0.0582% ( 3) 00:19:17.145 793.136 - 796.859: 0.0606% ( 2) 00:19:17.145 796.859 - 800.583: 0.0618% ( 1) 00:19:17.145 1072.409 - 1079.856: 0.0677% ( 5) 00:19:17.145 1079.856 - 1087.303: 0.0736% ( 5) 00:19:17.145 1087.303 - 1094.751: 0.0796% ( 5) 00:19:17.145 1094.751 - 1102.198: 0.0843% ( 4) 00:19:17.145 1102.198 - 1109.645: 0.0867% ( 2) 00:19:17.145 1109.645 - 1117.093: 0.0938% ( 6) 00:19:17.145 1117.093 - 1124.540: 0.1010% ( 6) 00:19:17.145 1124.540 - 1131.987: 0.1223% ( 18) 00:19:17.145 1131.987 - 1139.434: 0.1473% ( 21) 00:19:17.145 1139.434 - 1146.882: 0.1698% ( 19) 00:19:17.145 1146.882 - 1154.329: 0.2007% ( 26) 00:19:17.145 1154.329 - 1161.776: 0.2411% ( 34) 00:19:17.145 1161.776 - 1169.223: 0.3088% ( 57) 00:19:17.145 1169.223 - 1176.671: 0.4014% ( 78) 00:19:17.145 1176.671 - 1184.118: 0.5143% ( 95) 00:19:17.145 1184.118 - 1191.565: 0.6532% ( 117) 00:19:17.145 1191.565 - 1199.013: 0.8361% ( 154) 00:19:17.145 
1199.013 - 1206.460: 1.0844% ( 209) 00:19:17.145 1206.460 - 1213.907: 1.3944% ( 261) 00:19:17.145 1213.907 - 1221.354: 1.8006% ( 342) 00:19:17.145 1221.354 - 1228.802: 2.2673% ( 393) 00:19:17.145 1228.802 - 1236.249: 2.7911% ( 441) 00:19:17.145 1236.249 - 1243.696: 3.3600% ( 479) 00:19:17.145 1243.696 - 1251.144: 3.9610% ( 506) 00:19:17.145 1251.144 - 1258.591: 4.6190% ( 554) 00:19:17.145 1258.591 - 1266.038: 5.3672% ( 630) 00:19:17.145 1266.038 - 1273.485: 6.2034% ( 704) 00:19:17.145 1273.485 - 1280.933: 7.0835% ( 741) 00:19:17.145 1280.933 - 1288.380: 8.0503% ( 814) 00:19:17.145 1288.380 - 1295.827: 9.0859% ( 872) 00:19:17.145 1295.827 - 1303.275: 10.1953% ( 934) 00:19:17.145 1303.275 - 1310.722: 11.3675% ( 987) 00:19:17.145 1310.722 - 1318.169: 12.5659% ( 1009) 00:19:17.145 1318.169 - 1325.616: 13.8486% ( 1080) 00:19:17.145 1325.616 - 1333.064: 15.1432% ( 1090) 00:19:17.145 1333.064 - 1340.511: 16.5400% ( 1176) 00:19:17.145 1340.511 - 1347.958: 17.9854% ( 1217) 00:19:17.145 1347.958 - 1355.406: 19.4427% ( 1227) 00:19:17.145 1355.406 - 1362.853: 20.9095% ( 1235) 00:19:17.145 1362.853 - 1370.300: 22.4393% ( 1288) 00:19:17.145 1370.300 - 1377.747: 24.0712% ( 1374) 00:19:17.145 1377.747 - 1385.195: 25.7708% ( 1431) 00:19:17.145 1385.195 - 1392.642: 27.5144% ( 1468) 00:19:17.145 1392.642 - 1400.089: 29.3862% ( 1576) 00:19:17.145 1400.089 - 1407.537: 31.3222% ( 1630) 00:19:17.145 1407.537 - 1414.984: 33.3151% ( 1678) 00:19:17.145 1414.984 - 1422.431: 35.3615% ( 1723) 00:19:17.145 1422.431 - 1429.878: 37.4602% ( 1767) 00:19:17.145 1429.878 - 1437.326: 39.6622% ( 1854) 00:19:17.145 1437.326 - 1444.773: 41.8963% ( 1881) 00:19:17.145 1444.773 - 1452.220: 44.1220% ( 1874) 00:19:17.145 1452.220 - 1459.668: 46.3977% ( 1916) 00:19:17.145 1459.668 - 1467.115: 48.6721% ( 1915) 00:19:17.145 1467.115 - 1474.562: 50.9454% ( 1914) 00:19:17.145 1474.562 - 1482.009: 53.1878% ( 1888) 00:19:17.145 1482.009 - 1489.457: 55.3874% ( 1852) 00:19:17.145 1489.457 - 1496.904: 57.5835% ( 1849) 00:19:17.145 1496.904 - 1504.351: 59.7404% ( 1816) 00:19:17.145 1504.351 - 1511.799: 61.7785% ( 1716) 00:19:17.145 1511.799 - 1519.246: 63.7619% ( 1670) 00:19:17.145 1519.246 - 1526.693: 65.6029% ( 1550) 00:19:17.145 1526.693 - 1534.140: 67.4438% ( 1550) 00:19:17.145 1534.140 - 1541.588: 69.1731% ( 1456) 00:19:17.145 1541.588 - 1549.035: 70.7967% ( 1367) 00:19:17.145 1549.035 - 1556.482: 72.3027% ( 1268) 00:19:17.145 1556.482 - 1563.930: 73.7256% ( 1198) 00:19:17.145 1563.930 - 1571.377: 75.0261% ( 1095) 00:19:17.145 1571.377 - 1578.824: 76.1806% ( 972) 00:19:17.145 1578.824 - 1586.271: 77.2281% ( 882) 00:19:17.145 1586.271 - 1593.719: 78.1403% ( 768) 00:19:17.145 1593.719 - 1601.166: 78.9325% ( 667) 00:19:17.145 1601.166 - 1608.613: 79.6249% ( 583) 00:19:17.145 1608.613 - 1616.060: 80.2093% ( 492) 00:19:17.145 1616.060 - 1623.508: 80.7509% ( 456) 00:19:17.145 1623.508 - 1630.955: 81.2699% ( 437) 00:19:17.145 1630.955 - 1638.402: 81.7533% ( 407) 00:19:17.145 1638.402 - 1645.850: 82.1892% ( 367) 00:19:17.145 1645.850 - 1653.297: 82.5871% ( 335) 00:19:17.145 1653.297 - 1660.744: 82.9564% ( 311) 00:19:17.145 1660.744 - 1668.191: 83.3175% ( 304) 00:19:17.145 1668.191 - 1675.639: 83.6643% ( 292) 00:19:17.145 1675.639 - 1683.086: 84.0004% ( 283) 00:19:17.145 1683.086 - 1690.533: 84.3152% ( 265) 00:19:17.145 1690.533 - 1697.981: 84.6216% ( 258) 00:19:17.145 1697.981 - 1705.428: 84.9126% ( 245) 00:19:17.145 1705.428 - 1712.875: 85.1798% ( 225) 00:19:17.145 1712.875 - 1720.322: 85.4209% ( 203) 00:19:17.145 1720.322 - 1727.770: 85.6656% ( 
206) 00:19:17.145 1727.770 - 1735.217: 85.9328% ( 225) 00:19:17.145 1735.217 - 1742.664: 86.2012% ( 226) 00:19:17.145 1742.664 - 1750.112: 86.4614% ( 219) 00:19:17.145 1750.112 - 1757.559: 86.7333% ( 229) 00:19:17.145 1757.559 - 1765.006: 86.9887% ( 215) 00:19:17.145 1765.006 - 1772.453: 87.2334% ( 206) 00:19:17.145 1772.453 - 1779.901: 87.4721% ( 201) 00:19:17.145 1779.901 - 1787.348: 87.6811% ( 176) 00:19:17.145 1787.348 - 1794.795: 87.8795% ( 167) 00:19:17.145 1794.795 - 1802.243: 88.0885% ( 176) 00:19:17.145 1802.243 - 1809.690: 88.2880% ( 168) 00:19:17.145 1809.690 - 1817.137: 88.4828% ( 164) 00:19:17.145 1817.137 - 1824.584: 88.6835% ( 169) 00:19:17.145 1824.584 - 1832.032: 88.8724% ( 159) 00:19:17.145 1832.032 - 1839.479: 89.0529% ( 152) 00:19:17.145 1839.479 - 1846.926: 89.2370% ( 155) 00:19:17.145 1846.926 - 1854.374: 89.4247% ( 158) 00:19:17.145 1854.374 - 1861.821: 89.6230% ( 167) 00:19:17.145 1861.821 - 1869.268: 89.8249% ( 170) 00:19:17.145 1869.268 - 1876.715: 90.0375% ( 179) 00:19:17.145 1876.715 - 1884.163: 90.2406% ( 171) 00:19:17.145 1884.163 - 1891.610: 90.4425% ( 170) 00:19:17.145 1891.610 - 1899.057: 90.6563% ( 180) 00:19:17.145 1899.057 - 1906.505: 90.8749% ( 184) 00:19:17.145 1906.505 - 1921.399: 91.3452% ( 396) 00:19:17.145 1921.399 - 1936.294: 91.7846% ( 370) 00:19:17.145 1936.294 - 1951.188: 92.2942% ( 429) 00:19:17.145 1951.188 - 1966.083: 92.8084% ( 433) 00:19:17.145 1966.083 - 1980.977: 93.3370% ( 445) 00:19:17.145 1980.977 - 1995.872: 93.8501% ( 432) 00:19:17.145 1995.872 - 2010.767: 94.3287% ( 403) 00:19:17.145 2010.767 - 2025.661: 94.7765% ( 377) 00:19:17.145 2025.661 - 2040.556: 95.2824% ( 426) 00:19:17.145 2040.556 - 2055.450: 95.7706% ( 411) 00:19:17.145 2055.450 - 2070.345: 96.2183% ( 377) 00:19:17.145 2070.345 - 2085.239: 96.6827% ( 391) 00:19:17.145 2085.239 - 2100.134: 97.0794% ( 334) 00:19:17.145 2100.134 - 2115.028: 97.4274% ( 293) 00:19:17.145 2115.028 - 2129.923: 97.7303% ( 255) 00:19:17.145 2129.923 - 2144.818: 97.9809% ( 211) 00:19:17.145 2144.818 - 2159.712: 98.2208% ( 202) 00:19:17.145 2159.712 - 2174.607: 98.4073% ( 157) 00:19:17.145 2174.607 - 2189.501: 98.5712% ( 138) 00:19:17.145 2189.501 - 2204.396: 98.7185% ( 124) 00:19:17.145 2204.396 - 2219.290: 98.8254% ( 90) 00:19:17.145 2219.290 - 2234.185: 98.9097% ( 71) 00:19:17.145 2234.185 - 2249.080: 98.9845% ( 63) 00:19:17.145 2249.080 - 2263.974: 99.0451% ( 51) 00:19:17.145 2263.974 - 2278.869: 99.0962% ( 43) 00:19:17.145 2278.869 - 2293.763: 99.1330% ( 31) 00:19:17.145 2293.763 - 2308.658: 99.1793% ( 39) 00:19:17.145 2308.658 - 2323.552: 99.2161% ( 31) 00:19:17.145 2323.552 - 2338.447: 99.2601% ( 37) 00:19:17.145 2338.447 - 2353.342: 99.3111% ( 43) 00:19:17.145 2353.342 - 2368.236: 99.3515% ( 34) 00:19:17.145 2368.236 - 2383.131: 99.3848% ( 28) 00:19:17.145 2383.131 - 2398.025: 99.4145% ( 25) 00:19:17.145 2398.025 - 2412.920: 99.4406% ( 22) 00:19:17.145 2412.920 - 2427.814: 99.4703% ( 25) 00:19:17.145 2427.814 - 2442.709: 99.4940% ( 20) 00:19:17.145 2442.709 - 2457.604: 99.5130% ( 16) 00:19:17.145 2457.604 - 2472.498: 99.5427% ( 25) 00:19:17.145 2472.498 - 2487.393: 99.5641% ( 18) 00:19:17.145 2487.393 - 2502.287: 99.5819% ( 15) 00:19:17.145 2502.287 - 2517.182: 99.6057% ( 20) 00:19:17.145 2517.182 - 2532.076: 99.6342% ( 24) 00:19:17.145 2532.076 - 2546.971: 99.6520% ( 15) 00:19:17.145 2546.971 - 2561.865: 99.6663% ( 12) 00:19:17.145 2561.865 - 2576.760: 99.6817% ( 13) 00:19:17.145 2576.760 - 2591.655: 99.6995% ( 15) 00:19:17.145 2591.655 - 2606.549: 99.7185% ( 16) 00:19:17.145 2606.549 - 
2621.444: 99.7292% ( 9) 00:19:17.145 2621.444 - 2636.338: 99.7470% ( 15) 00:19:17.145 2636.338 - 2651.233: 99.7589% ( 10) 00:19:17.145 2651.233 - 2666.127: 99.7708% ( 10) 00:19:17.145 2666.127 - 2681.022: 99.7720% ( 1) 00:19:17.145 2681.022 - 2695.917: 99.7731% ( 1) 00:19:17.145 2725.706 - 2740.600: 99.7779% ( 4) 00:19:17.146 2740.600 - 2755.495: 99.7850% ( 6) 00:19:17.146 2755.495 - 2770.389: 99.7933% ( 7) 00:19:17.146 2770.389 - 2785.284: 99.8017% ( 7) 00:19:17.146 2785.284 - 2800.179: 99.8100% ( 7) 00:19:17.146 2800.179 - 2815.073: 99.8135% ( 3) 00:19:17.146 2829.968 - 2844.862: 99.8183% ( 4) 00:19:17.146 2844.862 - 2859.757: 99.8266% ( 7) 00:19:17.146 2859.757 - 2874.651: 99.8349% ( 7) 00:19:17.146 2874.651 - 2889.546: 99.8432% ( 7) 00:19:17.146 2889.546 - 2904.441: 99.8456% ( 2) 00:19:17.146 2904.441 - 2919.335: 99.8527% ( 6) 00:19:17.146 2919.335 - 2934.230: 99.8646% ( 10) 00:19:17.146 2934.230 - 2949.124: 99.8812% ( 14) 00:19:17.146 2949.124 - 2964.019: 99.8990% ( 15) 00:19:17.146 2964.019 - 2978.913: 99.9133% ( 12) 00:19:17.146 2978.913 - 2993.808: 99.9311% ( 15) 00:19:17.146 2993.808 - 3008.702: 99.9489% ( 15) 00:19:17.146 3008.702 - 3023.597: 99.9632% ( 12) 00:19:17.146 3023.597 - 3038.492: 99.9727% ( 8) 00:19:17.146 3038.492 - 3053.386: 99.9810% ( 7) 00:19:17.146 3053.386 - 3068.281: 99.9869% ( 5) 00:19:17.146 3068.281 - 3083.175: 99.9917% ( 4) 00:19:17.146 3083.175 - 3098.070: 99.9952% ( 3) 00:19:17.146 3321.488 - 3336.383: 99.9976% ( 2) 00:19:17.146 3813.009 - 3842.798: 99.9988% ( 1) 00:19:17.146 4379.003 - 4408.792: 100.0000% ( 1) 00:19:17.146 00:19:17.146 17:38:12 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:19:17.711 EAL: TSC is not safe to use in SMP mode 00:19:17.711 EAL: TSC is not invariant 00:19:17.711 [2024-07-15 17:38:13.518949] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:19.123 Initializing NVMe Controllers 00:19:19.123 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:19:19.123 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:19:19.123 Initialization complete. Launching workers. 
00:19:19.123 ======================================================== 00:19:19.123 Latency(us) 00:19:19.123 Device Information : IOPS MiB/s Average min max 00:19:19.123 PCIE (0000:00:10.0) NSID 1 from core 0: 71789.41 841.28 1783.16 246.23 4776.24 00:19:19.123 ======================================================== 00:19:19.123 Total : 71789.41 841.28 1783.16 246.23 4776.24 00:19:19.123 00:19:19.123 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:19.123 ================================================================================= 00:19:19.123 1.00000% : 1295.827us 00:19:19.123 10.00000% : 1474.562us 00:19:19.123 25.00000% : 1578.824us 00:19:19.123 50.00000% : 1765.006us 00:19:19.123 75.00000% : 1951.188us 00:19:19.123 90.00000% : 2115.028us 00:19:19.123 95.00000% : 2219.290us 00:19:19.123 98.00000% : 2398.025us 00:19:19.123 99.00000% : 2681.022us 00:19:19.123 99.50000% : 3053.386us 00:19:19.123 99.90000% : 3574.696us 00:19:19.123 99.99000% : 4230.057us 00:19:19.123 99.99900% : 4796.051us 00:19:19.123 99.99990% : 4796.051us 00:19:19.123 99.99999% : 4796.051us 00:19:19.123 00:19:19.124 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:19:19.124 ============================================================================== 00:19:19.124 Range in us Cumulative IO count 00:19:19.124 245.760 - 247.622: 0.0028% ( 2) 00:19:19.124 247.622 - 249.484: 0.0042% ( 1) 00:19:19.124 249.484 - 251.346: 0.0056% ( 1) 00:19:19.124 251.346 - 253.208: 0.0070% ( 1) 00:19:19.124 266.240 - 268.102: 0.0097% ( 2) 00:19:19.124 268.102 - 269.964: 0.0111% ( 1) 00:19:19.124 269.964 - 271.826: 0.0125% ( 1) 00:19:19.124 301.615 - 303.477: 0.0181% ( 4) 00:19:19.124 517.586 - 521.310: 0.0223% ( 3) 00:19:19.124 521.310 - 525.033: 0.0292% ( 5) 00:19:19.124 525.033 - 528.757: 0.0306% ( 1) 00:19:19.124 528.757 - 532.481: 0.0334% ( 2) 00:19:19.124 532.481 - 536.204: 0.0362% ( 2) 00:19:19.124 621.848 - 625.572: 0.0376% ( 1) 00:19:19.124 625.572 - 629.295: 0.0390% ( 1) 00:19:19.124 673.979 - 677.703: 0.0404% ( 1) 00:19:19.124 737.281 - 741.005: 0.0418% ( 1) 00:19:19.124 759.623 - 763.347: 0.0432% ( 1) 00:19:19.124 763.347 - 767.070: 0.0459% ( 2) 00:19:19.124 767.070 - 770.794: 0.0501% ( 3) 00:19:19.124 770.794 - 774.517: 0.0543% ( 3) 00:19:19.124 774.517 - 778.241: 0.0640% ( 7) 00:19:19.124 778.241 - 781.965: 0.0654% ( 1) 00:19:19.124 781.965 - 785.688: 0.0682% ( 2) 00:19:19.124 785.688 - 789.412: 0.0696% ( 1) 00:19:19.124 789.412 - 793.136: 0.0724% ( 2) 00:19:19.124 793.136 - 796.859: 0.0738% ( 1) 00:19:19.124 916.016 - 919.740: 0.0793% ( 4) 00:19:19.124 919.740 - 923.463: 0.0835% ( 3) 00:19:19.124 923.463 - 927.187: 0.0849% ( 1) 00:19:19.124 927.187 - 930.910: 0.0891% ( 3) 00:19:19.124 930.910 - 934.634: 0.0919% ( 2) 00:19:19.124 934.634 - 938.358: 0.0933% ( 1) 00:19:19.124 938.358 - 942.081: 0.0947% ( 1) 00:19:19.124 942.081 - 945.805: 0.0961% ( 1) 00:19:19.124 945.805 - 949.529: 0.0974% ( 1) 00:19:19.124 949.529 - 953.252: 0.1002% ( 2) 00:19:19.124 953.252 - 960.700: 0.1030% ( 2) 00:19:19.124 960.700 - 968.147: 0.1072% ( 3) 00:19:19.124 968.147 - 975.594: 0.1100% ( 2) 00:19:19.124 975.594 - 983.041: 0.1114% ( 1) 00:19:19.124 990.489 - 997.936: 0.1155% ( 3) 00:19:19.124 997.936 - 1005.383: 0.1267% ( 8) 00:19:19.124 1005.383 - 1012.831: 0.1336% ( 5) 00:19:19.124 1012.831 - 1020.278: 0.1434% ( 7) 00:19:19.124 1020.278 - 1027.725: 0.1531% ( 7) 00:19:19.124 1027.725 - 1035.172: 0.1615% ( 6) 00:19:19.124 1035.172 - 1042.620: 0.1754% ( 10) 00:19:19.124 1042.620 - 1050.067: 0.1851% ( 7) 
00:19:19.124 1050.067 - 1057.514: 0.1921% ( 5) 00:19:19.124 1057.514 - 1064.962: 0.2018% ( 7) 00:19:19.124 1064.962 - 1072.409: 0.2199% ( 13) 00:19:19.124 1072.409 - 1079.856: 0.2436% ( 17) 00:19:19.124 1079.856 - 1087.303: 0.2687% ( 18) 00:19:19.124 1087.303 - 1094.751: 0.2923% ( 17) 00:19:19.124 1094.751 - 1102.198: 0.3216% ( 21) 00:19:19.124 1102.198 - 1109.645: 0.3578% ( 26) 00:19:19.124 1109.645 - 1117.093: 0.3786% ( 15) 00:19:19.124 1117.093 - 1124.540: 0.4065% ( 20) 00:19:19.124 1124.540 - 1131.987: 0.4204% ( 10) 00:19:19.124 1131.987 - 1139.434: 0.4315% ( 8) 00:19:19.124 1139.434 - 1146.882: 0.4455% ( 10) 00:19:19.124 1146.882 - 1154.329: 0.4566% ( 8) 00:19:19.124 1154.329 - 1161.776: 0.4649% ( 6) 00:19:19.124 1161.776 - 1169.223: 0.5025% ( 27) 00:19:19.124 1169.223 - 1176.671: 0.5123% ( 7) 00:19:19.124 1176.671 - 1184.118: 0.5276% ( 11) 00:19:19.124 1184.118 - 1191.565: 0.5471% ( 14) 00:19:19.124 1191.565 - 1199.013: 0.5652% ( 13) 00:19:19.124 1199.013 - 1206.460: 0.5833% ( 13) 00:19:19.124 1206.460 - 1213.907: 0.6139% ( 22) 00:19:19.124 1213.907 - 1221.354: 0.6390% ( 18) 00:19:19.124 1221.354 - 1228.802: 0.6640% ( 18) 00:19:19.124 1228.802 - 1236.249: 0.7030% ( 28) 00:19:19.124 1236.249 - 1243.696: 0.7489% ( 33) 00:19:19.124 1243.696 - 1251.144: 0.7809% ( 23) 00:19:19.124 1251.144 - 1258.591: 0.8088% ( 20) 00:19:19.124 1258.591 - 1266.038: 0.8478% ( 28) 00:19:19.124 1266.038 - 1273.485: 0.8881% ( 29) 00:19:19.124 1273.485 - 1280.933: 0.9299% ( 30) 00:19:19.124 1280.933 - 1288.380: 0.9884% ( 42) 00:19:19.124 1288.380 - 1295.827: 1.0858% ( 70) 00:19:19.124 1295.827 - 1303.275: 1.2055% ( 86) 00:19:19.124 1303.275 - 1310.722: 1.3573% ( 109) 00:19:19.124 1310.722 - 1318.169: 1.5076% ( 108) 00:19:19.124 1318.169 - 1325.616: 1.6524% ( 104) 00:19:19.124 1325.616 - 1333.064: 1.7860% ( 96) 00:19:19.124 1333.064 - 1340.511: 1.9085% ( 88) 00:19:19.124 1340.511 - 1347.958: 2.0630% ( 111) 00:19:19.124 1347.958 - 1355.406: 2.2704% ( 149) 00:19:19.124 1355.406 - 1362.853: 2.4848% ( 154) 00:19:19.124 1362.853 - 1370.300: 2.7382% ( 182) 00:19:19.124 1370.300 - 1377.747: 3.0027% ( 190) 00:19:19.124 1377.747 - 1385.195: 3.3006% ( 214) 00:19:19.124 1385.195 - 1392.642: 3.6138% ( 225) 00:19:19.124 1392.642 - 1400.089: 3.9785% ( 262) 00:19:19.124 1400.089 - 1407.537: 4.4448% ( 335) 00:19:19.124 1407.537 - 1414.984: 4.9335% ( 351) 00:19:19.124 1414.984 - 1422.431: 5.4847% ( 396) 00:19:19.124 1422.431 - 1429.878: 5.9719% ( 350) 00:19:19.124 1429.878 - 1437.326: 6.4967% ( 377) 00:19:19.124 1437.326 - 1444.773: 7.1273% ( 453) 00:19:19.124 1444.773 - 1452.220: 7.8540% ( 522) 00:19:19.124 1452.220 - 1459.668: 8.6391% ( 564) 00:19:19.124 1459.668 - 1467.115: 9.5175% ( 631) 00:19:19.124 1467.115 - 1474.562: 10.2901% ( 555) 00:19:19.124 1474.562 - 1482.009: 11.2270% ( 673) 00:19:19.124 1482.009 - 1489.457: 12.1193% ( 641) 00:19:19.124 1489.457 - 1496.904: 13.0645% ( 679) 00:19:19.124 1496.904 - 1504.351: 13.9693% ( 650) 00:19:19.124 1504.351 - 1511.799: 14.9605% ( 712) 00:19:19.124 1511.799 - 1519.246: 15.9252% ( 693) 00:19:19.124 1519.246 - 1526.693: 17.0235% ( 789) 00:19:19.124 1526.693 - 1534.140: 18.0912% ( 767) 00:19:19.124 1534.140 - 1541.588: 19.2104% ( 804) 00:19:19.124 1541.588 - 1549.035: 20.3798% ( 840) 00:19:19.124 1549.035 - 1556.482: 21.6785% ( 933) 00:19:19.124 1556.482 - 1563.930: 22.9551% ( 917) 00:19:19.124 1563.930 - 1571.377: 24.2344% ( 919) 00:19:19.124 1571.377 - 1578.824: 25.5178% ( 922) 00:19:19.124 1578.824 - 1586.271: 26.7039% ( 852) 00:19:19.124 1586.271 - 1593.719: 27.8635% ( 833) 
00:19:19.124 1593.719 - 1601.166: 28.8657% ( 720) 00:19:19.124 1601.166 - 1608.613: 29.8917% ( 737) 00:19:19.124 1608.613 - 1616.060: 30.8940% ( 720) 00:19:19.124 1616.060 - 1623.508: 31.8141% ( 661) 00:19:19.124 1623.508 - 1630.955: 32.6452% ( 597) 00:19:19.124 1630.955 - 1638.402: 33.5974% ( 684) 00:19:19.124 1638.402 - 1645.850: 34.4716% ( 628) 00:19:19.124 1645.850 - 1653.297: 35.3263% ( 614) 00:19:19.124 1653.297 - 1660.744: 36.1922% ( 622) 00:19:19.124 1660.744 - 1668.191: 36.9982% ( 579) 00:19:19.124 1668.191 - 1675.639: 37.8974% ( 646) 00:19:19.125 1675.639 - 1683.086: 38.8176% ( 661) 00:19:19.125 1683.086 - 1690.533: 39.7391% ( 662) 00:19:19.125 1690.533 - 1697.981: 40.7456% ( 723) 00:19:19.125 1697.981 - 1705.428: 41.7423% ( 716) 00:19:19.125 1705.428 - 1712.875: 42.6819% ( 675) 00:19:19.125 1712.875 - 1720.322: 43.7302% ( 753) 00:19:19.125 1720.322 - 1727.770: 44.7645% ( 743) 00:19:19.125 1727.770 - 1735.217: 45.8377% ( 771) 00:19:19.125 1735.217 - 1742.664: 46.9027% ( 765) 00:19:19.125 1742.664 - 1750.112: 47.9968% ( 786) 00:19:19.125 1750.112 - 1757.559: 49.0645% ( 767) 00:19:19.125 1757.559 - 1765.006: 50.0654% ( 719) 00:19:19.125 1765.006 - 1772.453: 51.0942% ( 739) 00:19:19.125 1772.453 - 1779.901: 52.0964% ( 720) 00:19:19.125 1779.901 - 1787.348: 53.0570% ( 690) 00:19:19.125 1787.348 - 1794.795: 54.1316% ( 772) 00:19:19.125 1794.795 - 1802.243: 55.2133% ( 777) 00:19:19.125 1802.243 - 1809.690: 56.3255% ( 799) 00:19:19.125 1809.690 - 1817.137: 57.3417% ( 730) 00:19:19.125 1817.137 - 1824.584: 58.3816% ( 747) 00:19:19.125 1824.584 - 1832.032: 59.5036% ( 806) 00:19:19.125 1832.032 - 1839.479: 60.6548% ( 827) 00:19:19.125 1839.479 - 1846.926: 61.7880% ( 814) 00:19:19.125 1846.926 - 1854.374: 62.9364% ( 825) 00:19:19.125 1854.374 - 1861.821: 64.0556% ( 804) 00:19:19.125 1861.821 - 1869.268: 65.1498% ( 786) 00:19:19.125 1869.268 - 1876.715: 66.2467% ( 788) 00:19:19.125 1876.715 - 1884.163: 67.3729% ( 809) 00:19:19.125 1884.163 - 1891.610: 68.4545% ( 777) 00:19:19.125 1891.610 - 1899.057: 69.5222% ( 767) 00:19:19.125 1899.057 - 1906.505: 70.5496% ( 738) 00:19:19.125 1906.505 - 1921.399: 72.5110% ( 1409) 00:19:19.125 1921.399 - 1936.294: 74.2873% ( 1276) 00:19:19.125 1936.294 - 1951.188: 76.0844% ( 1291) 00:19:19.125 1951.188 - 1966.083: 77.7201% ( 1175) 00:19:19.125 1966.083 - 1980.977: 79.2681% ( 1112) 00:19:19.125 1980.977 - 1995.872: 80.7200% ( 1043) 00:19:19.125 1995.872 - 2010.767: 82.1371% ( 1018) 00:19:19.125 2010.767 - 2025.661: 83.4025% ( 909) 00:19:19.125 2025.661 - 2040.556: 84.6846% ( 921) 00:19:19.125 2040.556 - 2055.450: 85.9374% ( 900) 00:19:19.125 2055.450 - 2070.345: 87.1221% ( 851) 00:19:19.125 2070.345 - 2085.239: 88.2204% ( 789) 00:19:19.125 2085.239 - 2100.134: 89.2310% ( 726) 00:19:19.125 2100.134 - 2115.028: 90.1929% ( 691) 00:19:19.125 2115.028 - 2129.923: 91.0351% ( 605) 00:19:19.125 2129.923 - 2144.818: 91.8133% ( 559) 00:19:19.125 2144.818 - 2159.712: 92.5678% ( 542) 00:19:19.125 2159.712 - 2174.607: 93.3306% ( 548) 00:19:19.125 2174.607 - 2189.501: 94.0392% ( 509) 00:19:19.125 2189.501 - 2204.396: 94.6336% ( 427) 00:19:19.125 2204.396 - 2219.290: 95.1570% ( 376) 00:19:19.125 2219.290 - 2234.185: 95.6609% ( 362) 00:19:19.125 2234.185 - 2249.080: 96.0939% ( 311) 00:19:19.125 2249.080 - 2263.974: 96.4489% ( 255) 00:19:19.125 2263.974 - 2278.869: 96.7885% ( 244) 00:19:19.125 2278.869 - 2293.763: 97.0293% ( 173) 00:19:19.125 2293.763 - 2308.658: 97.2354% ( 148) 00:19:19.125 2308.658 - 2323.552: 97.4052% ( 122) 00:19:19.125 2323.552 - 2338.447: 97.5931% ( 
135) 00:19:19.125 2338.447 - 2353.342: 97.7407% ( 106) 00:19:19.125 2353.342 - 2368.236: 97.8507% ( 79) 00:19:19.125 2368.236 - 2383.131: 97.9272% ( 55) 00:19:19.125 2383.131 - 2398.025: 98.0038% ( 55) 00:19:19.125 2398.025 - 2412.920: 98.0984% ( 68) 00:19:19.125 2412.920 - 2427.814: 98.1708% ( 52) 00:19:19.125 2427.814 - 2442.709: 98.2223% ( 37) 00:19:19.125 2442.709 - 2457.604: 98.2808% ( 42) 00:19:19.125 2457.604 - 2472.498: 98.3323% ( 37) 00:19:19.125 2472.498 - 2487.393: 98.3769% ( 32) 00:19:19.125 2487.393 - 2502.287: 98.4158% ( 28) 00:19:19.125 2502.287 - 2517.182: 98.4353% ( 14) 00:19:19.125 2517.182 - 2532.076: 98.5077% ( 52) 00:19:19.125 2532.076 - 2546.971: 98.5662% ( 42) 00:19:19.125 2546.971 - 2561.865: 98.6511% ( 61) 00:19:19.125 2561.865 - 2576.760: 98.7137% ( 45) 00:19:19.125 2576.760 - 2591.655: 98.7708% ( 41) 00:19:19.125 2591.655 - 2606.549: 98.8140% ( 31) 00:19:19.125 2606.549 - 2621.444: 98.8669% ( 38) 00:19:19.125 2621.444 - 2636.338: 98.9072% ( 29) 00:19:19.125 2636.338 - 2651.233: 98.9393% ( 23) 00:19:19.125 2651.233 - 2666.127: 98.9921% ( 38) 00:19:19.125 2666.127 - 2681.022: 99.0395% ( 34) 00:19:19.125 2681.022 - 2695.917: 99.0687% ( 21) 00:19:19.125 2695.917 - 2710.811: 99.0854% ( 12) 00:19:19.125 2710.811 - 2725.706: 99.0966% ( 8) 00:19:19.125 2725.706 - 2740.600: 99.1091% ( 9) 00:19:19.125 2740.600 - 2755.495: 99.1439% ( 25) 00:19:19.125 2755.495 - 2770.389: 99.1787% ( 25) 00:19:19.125 2770.389 - 2785.284: 99.2330% ( 39) 00:19:19.125 2785.284 - 2800.179: 99.2706% ( 27) 00:19:19.125 2800.179 - 2815.073: 99.2998% ( 21) 00:19:19.125 2815.073 - 2829.968: 99.3416% ( 30) 00:19:19.125 2829.968 - 2844.862: 99.3805% ( 28) 00:19:19.125 2844.862 - 2859.757: 99.4223% ( 30) 00:19:19.125 2859.757 - 2874.651: 99.4418% ( 14) 00:19:19.125 2874.651 - 2889.546: 99.4557% ( 10) 00:19:19.125 2889.546 - 2904.441: 99.4668% ( 8) 00:19:19.384 2949.124 - 2964.019: 99.4682% ( 1) 00:19:19.384 2964.019 - 2978.913: 99.4752% ( 5) 00:19:19.384 2978.913 - 2993.808: 99.4835% ( 6) 00:19:19.384 2993.808 - 3008.702: 99.4877% ( 3) 00:19:19.384 3008.702 - 3023.597: 99.4947% ( 5) 00:19:19.384 3023.597 - 3038.492: 99.4989% ( 3) 00:19:19.384 3038.492 - 3053.386: 99.5086% ( 7) 00:19:19.384 3053.386 - 3068.281: 99.5267% ( 13) 00:19:19.384 3068.281 - 3083.175: 99.5643% ( 27) 00:19:19.384 3083.175 - 3098.070: 99.5977% ( 24) 00:19:19.384 3098.070 - 3112.964: 99.6172% ( 14) 00:19:19.384 3112.964 - 3127.859: 99.6311% ( 10) 00:19:19.384 3127.859 - 3142.754: 99.6506% ( 14) 00:19:19.384 3142.754 - 3157.648: 99.6743% ( 17) 00:19:19.384 3157.648 - 3172.543: 99.6910% ( 12) 00:19:19.384 3172.543 - 3187.437: 99.7091% ( 13) 00:19:19.384 3187.437 - 3202.332: 99.7285% ( 14) 00:19:19.384 3202.332 - 3217.226: 99.7633% ( 25) 00:19:19.384 3217.226 - 3232.121: 99.7856% ( 16) 00:19:19.384 3232.121 - 3247.016: 99.7926% ( 5) 00:19:19.384 3247.016 - 3261.910: 99.8079% ( 11) 00:19:19.384 3261.910 - 3276.805: 99.8288% ( 15) 00:19:19.384 3276.805 - 3291.699: 99.8427% ( 10) 00:19:19.384 3291.699 - 3306.594: 99.8441% ( 1) 00:19:19.384 3306.594 - 3321.488: 99.8455% ( 1) 00:19:19.384 3366.172 - 3381.067: 99.8524% ( 5) 00:19:19.384 3381.067 - 3395.961: 99.8552% ( 2) 00:19:19.384 3425.750 - 3440.645: 99.8608% ( 4) 00:19:19.384 3440.645 - 3455.540: 99.8650% ( 3) 00:19:19.384 3455.540 - 3470.434: 99.8747% ( 7) 00:19:19.384 3470.434 - 3485.329: 99.8775% ( 2) 00:19:19.384 3485.329 - 3500.223: 99.8845% ( 5) 00:19:19.384 3515.118 - 3530.012: 99.8900% ( 4) 00:19:19.384 3530.012 - 3544.907: 99.8970% ( 5) 00:19:19.384 3544.907 - 3559.801: 99.8998% 
( 2) 00:19:19.384 3559.801 - 3574.696: 99.9026% ( 2) 00:19:19.384 3589.591 - 3604.485: 99.9039% ( 1) 00:19:19.384 3604.485 - 3619.380: 99.9137% ( 7) 00:19:19.384 3619.380 - 3634.274: 99.9165% ( 2) 00:19:19.384 3634.274 - 3649.169: 99.9179% ( 1) 00:19:19.384 3649.169 - 3664.063: 99.9193% ( 1) 00:19:19.384 3708.747 - 3723.642: 99.9248% ( 4) 00:19:19.384 3723.642 - 3738.536: 99.9262% ( 1) 00:19:19.384 3753.431 - 3768.325: 99.9276% ( 1) 00:19:19.384 3798.115 - 3813.009: 99.9304% ( 2) 00:19:19.384 3872.587 - 3902.377: 99.9415% ( 8) 00:19:19.384 3902.377 - 3932.166: 99.9485% ( 5) 00:19:19.384 3932.166 - 3961.955: 99.9624% ( 10) 00:19:19.384 3961.955 - 3991.744: 99.9736% ( 8) 00:19:19.384 3991.744 - 4021.533: 99.9777% ( 3) 00:19:19.384 4110.900 - 4140.690: 99.9791% ( 1) 00:19:19.384 4140.690 - 4170.479: 99.9805% ( 1) 00:19:19.384 4170.479 - 4200.268: 99.9847% ( 3) 00:19:19.384 4200.268 - 4230.057: 99.9958% ( 8) 00:19:19.384 4498.159 - 4527.948: 99.9972% ( 1) 00:19:19.384 4766.261 - 4796.051: 100.0000% ( 2) 00:19:19.384 00:19:19.384 17:38:15 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:19:19.384 00:19:19.384 real 0m3.802s 00:19:19.384 user 0m2.561s 00:19:19.384 sys 0m1.239s 00:19:19.384 ************************************ 00:19:19.384 END TEST nvme_perf 00:19:19.384 ************************************ 00:19:19.384 17:38:15 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:19.384 17:38:15 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:19:19.384 17:38:15 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:19.384 17:38:15 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:19.384 17:38:15 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:19:19.384 17:38:15 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.384 17:38:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:19.384 ************************************ 00:19:19.384 START TEST nvme_hello_world 00:19:19.384 ************************************ 00:19:19.384 17:38:15 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:19:19.957 EAL: TSC is not safe to use in SMP mode 00:19:19.957 EAL: TSC is not invariant 00:19:19.957 [2024-07-15 17:38:15.737778] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:19.957 Initializing NVMe Controllers 00:19:19.957 Attaching to 0000:00:10.0 00:19:19.957 Attached to 0000:00:10.0 00:19:19.957 Namespace ID: 1 size: 5GB 00:19:19.957 Initialization complete. 00:19:19.957 INFO: using host memory buffer for IO 00:19:19.957 Hello world! 
00:19:19.957 00:19:19.957 real 0m0.646s 00:19:19.957 user 0m0.022s 00:19:19.957 sys 0m0.624s 00:19:19.957 17:38:15 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:19.957 17:38:15 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:19.957 ************************************ 00:19:19.957 END TEST nvme_hello_world 00:19:19.957 ************************************ 00:19:20.216 17:38:15 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:20.216 17:38:15 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:20.216 17:38:15 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:20.216 17:38:15 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.216 17:38:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.216 ************************************ 00:19:20.216 START TEST nvme_sgl 00:19:20.216 ************************************ 00:19:20.216 17:38:15 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:19:20.783 EAL: TSC is not safe to use in SMP mode 00:19:20.783 EAL: TSC is not invariant 00:19:20.783 [2024-07-15 17:38:16.384007] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:20.783 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:19:20.783 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:19:20.783 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:19:20.783 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:19:20.783 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:19:20.783 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:19:20.783 NVMe Readv/Writev Request test 00:19:20.783 Attaching to 0000:00:10.0 00:19:20.783 Attached to 0000:00:10.0 00:19:20.783 0000:00:10.0: build_io_request_2 test passed 00:19:20.783 0000:00:10.0: build_io_request_4 test passed 00:19:20.783 0000:00:10.0: build_io_request_5 test passed 00:19:20.783 0000:00:10.0: build_io_request_6 test passed 00:19:20.783 0000:00:10.0: build_io_request_7 test passed 00:19:20.783 0000:00:10.0: build_io_request_10 test passed 00:19:20.783 Cleaning up... 
00:19:20.783 00:19:20.783 real 0m0.611s 00:19:20.783 user 0m0.021s 00:19:20.783 sys 0m0.590s 00:19:20.783 17:38:16 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.783 17:38:16 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:19:20.783 ************************************ 00:19:20.783 END TEST nvme_sgl 00:19:20.783 ************************************ 00:19:20.783 17:38:16 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:20.783 17:38:16 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:20.783 17:38:16 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:20.783 17:38:16 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.783 17:38:16 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:20.783 ************************************ 00:19:20.783 START TEST nvme_e2edp 00:19:20.783 ************************************ 00:19:20.783 17:38:16 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:21.363 EAL: TSC is not safe to use in SMP mode 00:19:21.363 EAL: TSC is not invariant 00:19:21.363 [2024-07-15 17:38:17.059109] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:21.363 NVMe Write/Read with End-to-End data protection test 00:19:21.363 Attaching to 0000:00:10.0 00:19:21.363 Attached to 0000:00:10.0 00:19:21.363 Cleaning up... 00:19:21.363 00:19:21.363 real 0m0.623s 00:19:21.363 user 0m0.000s 00:19:21.363 sys 0m0.624s 00:19:21.363 17:38:17 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:21.364 17:38:17 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:21.364 ************************************ 00:19:21.364 END TEST nvme_e2edp 00:19:21.364 ************************************ 00:19:21.364 17:38:17 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:21.364 17:38:17 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:21.364 17:38:17 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:21.364 17:38:17 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:21.364 17:38:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:21.364 ************************************ 00:19:21.364 START TEST nvme_reserve 00:19:21.364 ************************************ 00:19:21.364 17:38:17 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:21.931 EAL: TSC is not safe to use in SMP mode 00:19:21.931 EAL: TSC is not invariant 00:19:21.931 [2024-07-15 17:38:17.730660] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:22.203 ===================================================== 00:19:22.203 NVMe Controller at PCI bus 0, device 16, function 0 00:19:22.203 ===================================================== 00:19:22.203 Reservations: Not Supported 00:19:22.203 Reservation test passed 00:19:22.203 00:19:22.203 real 0m0.628s 00:19:22.203 user 0m0.021s 00:19:22.203 sys 0m0.606s 00:19:22.203 17:38:17 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.203 17:38:17 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:22.203 ************************************ 00:19:22.203 END TEST nvme_reserve 00:19:22.203 ************************************ 00:19:22.203 17:38:17 nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:19:22.203 17:38:17 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:22.203 17:38:17 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:22.203 17:38:17 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.203 17:38:17 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.203 ************************************ 00:19:22.203 START TEST nvme_err_injection 00:19:22.203 ************************************ 00:19:22.203 17:38:17 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:22.770 EAL: TSC is not safe to use in SMP mode 00:19:22.770 EAL: TSC is not invariant 00:19:22.770 [2024-07-15 17:38:18.389142] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:22.770 NVMe Error Injection test 00:19:22.770 Attaching to 0000:00:10.0 00:19:22.770 Attached to 0000:00:10.0 00:19:22.770 0000:00:10.0: get features failed as expected 00:19:22.771 0000:00:10.0: get features successfully as expected 00:19:22.771 0000:00:10.0: read failed as expected 00:19:22.771 0000:00:10.0: read successfully as expected 00:19:22.771 Cleaning up... 00:19:22.771 00:19:22.771 real 0m0.620s 00:19:22.771 user 0m0.002s 00:19:22.771 sys 0m0.613s 00:19:22.771 17:38:18 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.771 17:38:18 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:22.771 ************************************ 00:19:22.771 END TEST nvme_err_injection 00:19:22.771 ************************************ 00:19:22.771 17:38:18 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:22.771 17:38:18 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:22.771 17:38:18 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:19:22.771 17:38:18 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:22.771 17:38:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:22.771 ************************************ 00:19:22.771 START TEST nvme_overhead 00:19:22.771 ************************************ 00:19:22.771 17:38:18 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:23.338 EAL: TSC is not safe to use in SMP mode 00:19:23.338 EAL: TSC is not invariant 00:19:23.339 [2024-07-15 17:38:19.062825] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:24.273 Initializing NVMe Controllers 00:19:24.273 Attaching to 0000:00:10.0 00:19:24.273 Attached to 0000:00:10.0 00:19:24.273 Initialization complete. Launching workers. 
00:19:24.273 submit (in ns) avg, min, max = 9983.8, 8416.8, 56331.0 00:19:24.273 complete (in ns) avg, min, max = 7095.8, 5838.2, 118003.8 00:19:24.273 00:19:24.273 Submit histogram 00:19:24.273 ================ 00:19:24.273 Range in us Cumulative Count 00:19:24.273 8.378 - 8.436: 0.0086% ( 1) 00:19:24.273 8.669 - 8.727: 0.0257% ( 2) 00:19:24.273 8.727 - 8.785: 0.0856% ( 7) 00:19:24.273 8.785 - 8.844: 0.2655% ( 21) 00:19:24.273 8.844 - 8.902: 0.5139% ( 29) 00:19:24.273 8.902 - 8.960: 0.8136% ( 35) 00:19:24.273 8.960 - 9.018: 1.8671% ( 123) 00:19:24.273 9.018 - 9.076: 3.4943% ( 190) 00:19:24.273 9.076 - 9.135: 4.9846% ( 174) 00:19:24.273 9.135 - 9.193: 9.5923% ( 538) 00:19:24.273 9.193 - 9.251: 26.5844% ( 1984) 00:19:24.273 9.251 - 9.309: 49.7002% ( 2699) 00:19:24.273 9.309 - 9.367: 63.4978% ( 1611) 00:19:24.273 9.367 - 9.425: 68.2682% ( 557) 00:19:24.273 9.425 - 9.484: 70.3923% ( 248) 00:19:24.273 9.484 - 9.542: 71.4800% ( 127) 00:19:24.273 9.542 - 9.600: 72.1737% ( 81) 00:19:24.273 9.600 - 9.658: 72.8760% ( 82) 00:19:24.273 9.658 - 9.716: 73.3642% ( 57) 00:19:24.273 9.716 - 9.775: 73.7239% ( 42) 00:19:24.273 9.775 - 9.833: 73.9723% ( 29) 00:19:24.273 9.833 - 9.891: 74.1435% ( 20) 00:19:24.273 9.891 - 9.949: 74.3491% ( 24) 00:19:24.273 9.949 - 10.007: 74.4947% ( 17) 00:19:24.273 10.007 - 10.065: 74.6317% ( 16) 00:19:24.273 10.065 - 10.124: 74.7773% ( 17) 00:19:24.273 10.124 - 10.182: 74.8801% ( 12) 00:19:24.273 10.182 - 10.240: 76.2162% ( 156) 00:19:24.273 10.240 - 10.298: 78.6742% ( 287) 00:19:24.273 10.298 - 10.356: 80.1302% ( 170) 00:19:24.273 10.356 - 10.415: 81.2778% ( 134) 00:19:24.273 10.415 - 10.473: 82.0315% ( 88) 00:19:24.273 10.473 - 10.531: 82.3227% ( 34) 00:19:24.273 10.531 - 10.589: 82.5968% ( 32) 00:19:24.273 10.589 - 10.647: 82.9137% ( 37) 00:19:24.273 10.647 - 10.705: 83.2306% ( 37) 00:19:24.273 10.705 - 10.764: 83.4104% ( 21) 00:19:24.273 10.764 - 10.822: 83.5474% ( 16) 00:19:24.273 10.822 - 10.880: 83.5731% ( 3) 00:19:24.273 10.880 - 10.938: 83.6245% ( 6) 00:19:24.273 10.938 - 10.996: 83.6845% ( 7) 00:19:24.273 10.996 - 11.055: 83.7444% ( 7) 00:19:24.273 11.055 - 11.113: 83.8386% ( 11) 00:19:24.273 11.113 - 11.171: 84.6095% ( 90) 00:19:24.273 11.171 - 11.229: 86.2795% ( 195) 00:19:24.273 11.229 - 11.287: 89.0116% ( 319) 00:19:24.273 11.287 - 11.345: 91.4183% ( 281) 00:19:24.273 11.345 - 11.404: 92.8914% ( 172) 00:19:24.273 11.404 - 11.462: 93.8506% ( 112) 00:19:24.273 11.462 - 11.520: 94.5101% ( 77) 00:19:24.273 11.520 - 11.578: 94.8441% ( 39) 00:19:24.273 11.578 - 11.636: 95.0668% ( 26) 00:19:24.273 11.636 - 11.695: 95.2381% ( 20) 00:19:24.273 11.695 - 11.753: 95.3923% ( 18) 00:19:24.273 11.753 - 11.811: 95.5464% ( 18) 00:19:24.273 11.811 - 11.869: 95.6064% ( 7) 00:19:24.273 11.869 - 11.927: 95.6578% ( 6) 00:19:24.273 11.927 - 11.985: 95.7263% ( 8) 00:19:24.273 11.985 - 12.044: 95.7777% ( 6) 00:19:24.273 12.044 - 12.102: 95.8376% ( 7) 00:19:24.273 12.102 - 12.160: 95.9318% ( 11) 00:19:24.273 12.160 - 12.218: 96.0517% ( 14) 00:19:24.273 12.218 - 12.276: 96.2059% ( 18) 00:19:24.273 12.276 - 12.335: 96.2744% ( 8) 00:19:24.273 12.335 - 12.393: 96.3087% ( 4) 00:19:24.273 12.393 - 12.451: 96.3515% ( 5) 00:19:24.273 12.451 - 12.509: 96.3601% ( 1) 00:19:24.273 12.509 - 12.567: 96.3686% ( 1) 00:19:24.273 12.567 - 12.625: 96.4200% ( 6) 00:19:24.273 12.625 - 12.684: 96.4457% ( 3) 00:19:24.273 12.684 - 12.742: 96.4628% ( 2) 00:19:24.273 12.742 - 12.800: 96.4800% ( 2) 00:19:24.273 12.800 - 12.858: 96.4885% ( 1) 00:19:24.273 12.858 - 12.916: 96.4971% ( 1) 00:19:24.273 12.916 - 
12.975: 96.5142% ( 2) 00:19:24.273 12.975 - 13.033: 96.5228% ( 1) 00:19:24.273 13.033 - 13.091: 96.5313% ( 1) 00:19:24.273 13.091 - 13.149: 96.5485% ( 2) 00:19:24.273 13.149 - 13.207: 96.5656% ( 2) 00:19:24.273 13.207 - 13.265: 96.5827% ( 2) 00:19:24.273 13.265 - 13.324: 96.6170% ( 4) 00:19:24.273 13.324 - 13.382: 96.6341% ( 2) 00:19:24.273 13.382 - 13.440: 96.6769% ( 5) 00:19:24.273 13.440 - 13.498: 96.7026% ( 3) 00:19:24.273 13.498 - 13.556: 96.7712% ( 8) 00:19:24.273 13.556 - 13.615: 96.8054% ( 4) 00:19:24.273 13.615 - 13.673: 96.8568% ( 6) 00:19:24.273 13.673 - 13.731: 96.8739% ( 2) 00:19:24.273 13.731 - 13.789: 96.8911% ( 2) 00:19:24.273 13.789 - 13.847: 96.9253% ( 4) 00:19:24.273 13.847 - 13.905: 96.9938% ( 8) 00:19:24.273 13.905 - 13.964: 97.0281% ( 4) 00:19:24.273 13.964 - 14.022: 97.0880% ( 7) 00:19:24.273 14.022 - 14.080: 97.1052% ( 2) 00:19:24.273 14.080 - 14.138: 97.1394% ( 4) 00:19:24.273 14.138 - 14.196: 97.1651% ( 3) 00:19:24.273 14.196 - 14.255: 97.1994% ( 4) 00:19:24.273 14.255 - 14.313: 97.2422% ( 5) 00:19:24.273 14.313 - 14.371: 97.2508% ( 1) 00:19:24.273 14.371 - 14.429: 97.2679% ( 2) 00:19:24.273 14.429 - 14.487: 97.2765% ( 1) 00:19:24.273 14.487 - 14.545: 97.2850% ( 1) 00:19:24.273 14.545 - 14.604: 97.3279% ( 5) 00:19:24.273 14.662 - 14.720: 97.3535% ( 3) 00:19:24.273 14.720 - 14.778: 97.3792% ( 3) 00:19:24.273 14.778 - 14.836: 97.4049% ( 3) 00:19:24.273 14.836 - 14.895: 97.4478% ( 5) 00:19:24.273 14.895 - 15.011: 97.4906% ( 5) 00:19:24.273 15.011 - 15.127: 97.5248% ( 4) 00:19:24.273 15.127 - 15.244: 97.6019% ( 9) 00:19:24.273 15.244 - 15.360: 97.6790% ( 9) 00:19:24.273 15.360 - 15.476: 97.7304% ( 6) 00:19:24.273 15.476 - 15.593: 97.7646% ( 4) 00:19:24.273 15.593 - 15.709: 97.7989% ( 4) 00:19:24.273 15.709 - 15.825: 97.8246% ( 3) 00:19:24.273 15.825 - 15.942: 97.8417% ( 2) 00:19:24.273 15.942 - 16.058: 97.8845% ( 5) 00:19:24.273 16.058 - 16.175: 97.9017% ( 2) 00:19:24.273 16.175 - 16.291: 97.9359% ( 4) 00:19:24.273 16.291 - 16.407: 97.9702% ( 4) 00:19:24.273 16.407 - 16.524: 97.9959% ( 3) 00:19:24.273 16.524 - 16.640: 98.0387% ( 5) 00:19:24.273 16.640 - 16.756: 98.0644% ( 3) 00:19:24.273 16.756 - 16.873: 98.0815% ( 2) 00:19:24.273 16.873 - 16.989: 98.1072% ( 3) 00:19:24.273 16.989 - 17.105: 98.1501% ( 5) 00:19:24.273 17.105 - 17.222: 98.1586% ( 1) 00:19:24.273 17.222 - 17.338: 98.1757% ( 2) 00:19:24.273 17.338 - 17.455: 98.2357% ( 7) 00:19:24.273 17.455 - 17.571: 98.2614% ( 3) 00:19:24.273 17.571 - 17.687: 98.2785% ( 2) 00:19:24.273 17.687 - 17.804: 98.2956% ( 2) 00:19:24.273 17.804 - 17.920: 98.3128% ( 2) 00:19:24.273 18.036 - 18.153: 98.3385% ( 3) 00:19:24.273 18.153 - 18.269: 98.3470% ( 1) 00:19:24.273 18.269 - 18.385: 98.3556% ( 1) 00:19:24.273 18.502 - 18.618: 98.3642% ( 1) 00:19:24.273 18.618 - 18.735: 98.3813% ( 2) 00:19:24.273 18.735 - 18.851: 98.4070% ( 3) 00:19:24.273 18.851 - 18.967: 98.4327% ( 3) 00:19:24.273 18.967 - 19.084: 98.4412% ( 1) 00:19:24.273 19.084 - 19.200: 98.5098% ( 8) 00:19:24.273 19.200 - 19.316: 98.7496% ( 28) 00:19:24.273 19.316 - 19.433: 98.8095% ( 7) 00:19:24.273 19.433 - 19.549: 98.8438% ( 4) 00:19:24.273 19.549 - 19.665: 98.8523% ( 1) 00:19:24.273 19.665 - 19.782: 98.8609% ( 1) 00:19:24.273 19.782 - 19.898: 98.8695% ( 1) 00:19:24.273 19.898 - 20.015: 98.8866% ( 2) 00:19:24.273 20.015 - 20.131: 98.9123% ( 3) 00:19:24.273 20.131 - 20.247: 98.9294% ( 2) 00:19:24.273 20.247 - 20.364: 98.9380% ( 1) 00:19:24.273 20.480 - 20.596: 98.9551% ( 2) 00:19:24.273 20.713 - 20.829: 98.9894% ( 4) 00:19:24.273 20.829 - 20.945: 98.9979% ( 1) 
00:19:24.273 20.945 - 21.062: 99.0151% ( 2) 00:19:24.273 21.062 - 21.178: 99.0236% ( 1) 00:19:24.273 21.178 - 21.295: 99.0579% ( 4) 00:19:24.273 21.295 - 21.411: 99.0665% ( 1) 00:19:24.273 21.527 - 21.644: 99.1093% ( 5) 00:19:24.273 21.644 - 21.760: 99.1178% ( 1) 00:19:24.273 21.760 - 21.876: 99.1350% ( 2) 00:19:24.273 21.993 - 22.109: 99.1521% ( 2) 00:19:24.273 22.225 - 22.342: 99.1692% ( 2) 00:19:24.273 22.691 - 22.807: 99.1864% ( 2) 00:19:24.273 22.807 - 22.924: 99.1949% ( 1) 00:19:24.273 23.273 - 23.389: 99.2035% ( 1) 00:19:24.273 23.622 - 23.738: 99.2206% ( 2) 00:19:24.273 23.855 - 23.971: 99.2720% ( 6) 00:19:24.273 23.971 - 24.087: 99.2806% ( 1) 00:19:24.273 24.087 - 24.204: 99.3748% ( 11) 00:19:24.273 24.204 - 24.320: 99.4776% ( 12) 00:19:24.273 24.320 - 24.436: 99.5289% ( 6) 00:19:24.273 24.436 - 24.553: 99.6060% ( 9) 00:19:24.273 24.553 - 24.669: 99.6745% ( 8) 00:19:24.273 24.669 - 24.785: 99.7259% ( 6) 00:19:24.273 24.785 - 24.902: 99.7345% ( 1) 00:19:24.273 24.902 - 25.018: 99.7431% ( 1) 00:19:24.273 25.018 - 25.135: 99.7602% ( 2) 00:19:24.273 25.135 - 25.251: 99.7859% ( 3) 00:19:24.273 25.251 - 25.367: 99.8030% ( 2) 00:19:24.273 25.367 - 25.484: 99.8116% ( 1) 00:19:24.273 25.484 - 25.600: 99.8201% ( 1) 00:19:24.273 25.600 - 25.716: 99.8373% ( 2) 00:19:24.273 25.716 - 25.833: 99.8458% ( 1) 00:19:24.273 25.833 - 25.949: 99.8630% ( 2) 00:19:24.273 26.065 - 26.182: 99.8801% ( 2) 00:19:24.273 26.182 - 26.298: 99.8887% ( 1) 00:19:24.273 26.298 - 26.415: 99.8972% ( 1) 00:19:24.273 26.531 - 26.647: 99.9058% ( 1) 00:19:24.273 27.462 - 27.578: 99.9229% ( 2) 00:19:24.273 27.811 - 27.927: 99.9315% ( 1) 00:19:24.273 29.440 - 29.556: 99.9400% ( 1) 00:19:24.273 29.789 - 30.022: 99.9486% ( 1) 00:19:24.273 31.418 - 31.651: 99.9572% ( 1) 00:19:24.273 32.815 - 33.047: 99.9743% ( 2) 00:19:24.273 36.538 - 36.771: 99.9829% ( 1) 00:19:24.273 38.866 - 39.098: 99.9914% ( 1) 00:19:24.273 56.320 - 56.553: 100.0000% ( 1) 00:19:24.273 00:19:24.273 Complete histogram 00:19:24.273 ================== 00:19:24.273 Range in us Cumulative Count 00:19:24.273 5.818 - 5.847: 0.0086% ( 1) 00:19:24.273 5.935 - 5.964: 0.0257% ( 2) 00:19:24.273 5.964 - 5.993: 0.1028% ( 9) 00:19:24.273 5.993 - 6.022: 0.2141% ( 13) 00:19:24.273 6.051 - 6.080: 0.2312% ( 2) 00:19:24.273 6.080 - 6.109: 0.2398% ( 1) 00:19:24.273 6.109 - 6.138: 0.4282% ( 22) 00:19:24.273 6.138 - 6.167: 0.8051% ( 44) 00:19:24.273 6.167 - 6.196: 1.2590% ( 53) 00:19:24.273 6.196 - 6.225: 1.6101% ( 41) 00:19:24.273 6.225 - 6.255: 1.7044% ( 11) 00:19:24.273 6.255 - 6.284: 1.7643% ( 7) 00:19:24.273 6.284 - 6.313: 2.5951% ( 97) 00:19:24.273 6.313 - 6.342: 9.2326% ( 775) 00:19:24.274 6.342 - 6.371: 27.5522% ( 2139) 00:19:24.274 6.371 - 6.400: 48.2956% ( 2422) 00:19:24.274 6.400 - 6.429: 59.2240% ( 1276) 00:19:24.274 6.429 - 6.458: 63.2751% ( 473) 00:19:24.274 6.458 - 6.487: 65.4848% ( 258) 00:19:24.274 6.487 - 6.516: 67.0007% ( 177) 00:19:24.274 6.516 - 6.545: 67.7629% ( 89) 00:19:24.274 6.545 - 6.575: 68.5081% ( 87) 00:19:24.274 6.575 - 6.604: 69.5015% ( 116) 00:19:24.274 6.604 - 6.633: 70.2467% ( 87) 00:19:24.274 6.633 - 6.662: 70.7948% ( 64) 00:19:24.274 6.662 - 6.691: 71.1117% ( 37) 00:19:24.274 6.691 - 6.720: 71.4543% ( 40) 00:19:24.274 6.720 - 6.749: 71.6598% ( 24) 00:19:24.274 6.749 - 6.778: 71.9596% ( 35) 00:19:24.274 6.778 - 6.807: 72.8160% ( 100) 00:19:24.274 6.807 - 6.836: 73.5868% ( 90) 00:19:24.274 6.836 - 6.865: 73.9808% ( 46) 00:19:24.274 6.865 - 6.895: 74.2806% ( 35) 00:19:24.274 6.895 - 6.924: 74.3491% ( 8) 00:19:24.274 6.924 - 6.953: 74.4262% ( 9) 
00:19:24.274 6.953 - 6.982: 74.4433% ( 2) 00:19:24.274 6.982 - 7.011: 74.5118% ( 8) 00:19:24.274 7.011 - 7.040: 74.5204% ( 1) 00:19:24.274 7.040 - 7.069: 74.5975% ( 9) 00:19:24.274 7.069 - 7.098: 74.6403% ( 5) 00:19:24.274 7.098 - 7.127: 74.6660% ( 3) 00:19:24.274 7.127 - 7.156: 74.7088% ( 5) 00:19:24.274 7.156 - 7.185: 74.7259% ( 2) 00:19:24.274 7.215 - 7.244: 74.7516% ( 3) 00:19:24.274 7.244 - 7.273: 74.7859% ( 4) 00:19:24.274 7.302 - 7.331: 74.7945% ( 1) 00:19:24.274 7.331 - 7.360: 74.8030% ( 1) 00:19:24.274 7.360 - 7.389: 74.8116% ( 1) 00:19:24.274 7.389 - 7.418: 74.8201% ( 1) 00:19:24.274 7.505 - 7.564: 74.8373% ( 2) 00:19:24.274 7.564 - 7.622: 74.8544% ( 2) 00:19:24.274 7.622 - 7.680: 74.8801% ( 3) 00:19:24.274 7.680 - 7.738: 74.9058% ( 3) 00:19:24.274 7.738 - 7.796: 74.9144% ( 1) 00:19:24.274 7.796 - 7.855: 74.9229% ( 1) 00:19:24.274 7.855 - 7.913: 76.4988% ( 184) 00:19:24.274 7.913 - 7.971: 82.9479% ( 753) 00:19:24.274 7.971 - 8.029: 85.7143% ( 323) 00:19:24.274 8.029 - 8.087: 86.5536% ( 98) 00:19:24.274 8.087 - 8.145: 86.7592% ( 24) 00:19:24.274 8.145 - 8.204: 86.8448% ( 10) 00:19:24.274 8.204 - 8.262: 86.9219% ( 9) 00:19:24.274 8.262 - 8.320: 87.0332% ( 13) 00:19:24.274 8.320 - 8.378: 87.6156% ( 68) 00:19:24.274 8.378 - 8.436: 90.1250% ( 293) 00:19:24.274 8.436 - 8.495: 93.1398% ( 352) 00:19:24.274 8.495 - 8.553: 94.5786% ( 168) 00:19:24.274 8.553 - 8.611: 95.2980% ( 84) 00:19:24.274 8.611 - 8.669: 95.5892% ( 34) 00:19:24.274 8.669 - 8.727: 95.7520% ( 19) 00:19:24.274 8.727 - 8.785: 95.8976% ( 17) 00:19:24.274 8.785 - 8.844: 95.9490% ( 6) 00:19:24.274 8.844 - 8.902: 95.9832% ( 4) 00:19:24.274 8.902 - 8.960: 96.0175% ( 4) 00:19:24.274 8.960 - 9.018: 96.0946% ( 9) 00:19:24.274 9.018 - 9.076: 96.1545% ( 7) 00:19:24.274 9.076 - 9.135: 96.1802% ( 3) 00:19:24.274 9.135 - 9.193: 96.2230% ( 5) 00:19:24.274 9.193 - 9.251: 96.2658% ( 5) 00:19:24.274 9.251 - 9.309: 96.3344% ( 8) 00:19:24.274 9.309 - 9.367: 96.3772% ( 5) 00:19:24.274 9.367 - 9.425: 96.4286% ( 6) 00:19:24.274 9.425 - 9.484: 96.4714% ( 5) 00:19:24.274 9.484 - 9.542: 96.5313% ( 7) 00:19:24.274 9.542 - 9.600: 96.5827% ( 6) 00:19:24.274 9.600 - 9.658: 96.6170% ( 4) 00:19:24.274 9.658 - 9.716: 96.6684% ( 6) 00:19:24.274 9.716 - 9.775: 96.7112% ( 5) 00:19:24.274 9.775 - 9.833: 96.7198% ( 1) 00:19:24.274 9.833 - 9.891: 96.7455% ( 3) 00:19:24.274 9.891 - 9.949: 96.7712% ( 3) 00:19:24.274 9.949 - 10.007: 96.8311% ( 7) 00:19:24.274 10.007 - 10.065: 96.8654% ( 4) 00:19:24.274 10.065 - 10.124: 96.9168% ( 6) 00:19:24.274 10.124 - 10.182: 96.9767% ( 7) 00:19:24.274 10.182 - 10.240: 97.0624% ( 10) 00:19:24.274 10.240 - 10.298: 97.1223% ( 7) 00:19:24.274 10.298 - 10.356: 97.1566% ( 4) 00:19:24.274 10.356 - 10.415: 97.2593% ( 12) 00:19:24.274 10.415 - 10.473: 97.2936% ( 4) 00:19:24.274 10.473 - 10.531: 97.3535% ( 7) 00:19:24.274 10.531 - 10.589: 97.3964% ( 5) 00:19:24.274 10.589 - 10.647: 97.4563% ( 7) 00:19:24.274 10.647 - 10.705: 97.4649% ( 1) 00:19:24.274 10.705 - 10.764: 97.4734% ( 1) 00:19:24.274 10.764 - 10.822: 97.4906% ( 2) 00:19:24.274 10.822 - 10.880: 97.5077% ( 2) 00:19:24.274 10.880 - 10.938: 97.5420% ( 4) 00:19:24.274 10.938 - 10.996: 97.6190% ( 9) 00:19:24.274 10.996 - 11.055: 97.6704% ( 6) 00:19:24.274 11.055 - 11.113: 97.7133% ( 5) 00:19:24.274 11.113 - 11.171: 97.7304% ( 2) 00:19:24.274 11.171 - 11.229: 97.7561% ( 3) 00:19:24.274 11.229 - 11.287: 97.7646% ( 1) 00:19:24.274 11.287 - 11.345: 97.7989% ( 4) 00:19:24.274 11.345 - 11.404: 97.8417% ( 5) 00:19:24.274 11.404 - 11.462: 97.8503% ( 1) 00:19:24.274 11.462 - 11.520: 
97.8845% ( 4) 00:19:24.274 11.520 - 11.578: 97.8931% ( 1) 00:19:24.274 11.578 - 11.636: 97.9017% ( 1) 00:19:24.274 11.636 - 11.695: 97.9188% ( 2) 00:19:24.274 11.695 - 11.753: 97.9274% ( 1) 00:19:24.274 11.753 - 11.811: 97.9531% ( 3) 00:19:24.274 11.811 - 11.869: 97.9702% ( 2) 00:19:24.274 12.044 - 12.102: 97.9788% ( 1) 00:19:24.274 12.160 - 12.218: 98.0045% ( 3) 00:19:24.274 12.276 - 12.335: 98.0130% ( 1) 00:19:24.274 12.335 - 12.393: 98.0216% ( 1) 00:19:24.274 12.393 - 12.451: 98.0473% ( 3) 00:19:24.274 12.451 - 12.509: 98.0558% ( 1) 00:19:24.274 12.509 - 12.567: 98.0644% ( 1) 00:19:24.274 12.625 - 12.684: 98.0901% ( 3) 00:19:24.274 12.742 - 12.800: 98.1072% ( 2) 00:19:24.274 12.858 - 12.916: 98.1244% ( 2) 00:19:24.274 12.916 - 12.975: 98.1329% ( 1) 00:19:24.274 12.975 - 13.033: 98.1415% ( 1) 00:19:24.274 13.033 - 13.091: 98.1501% ( 1) 00:19:24.274 13.091 - 13.149: 98.1586% ( 1) 00:19:24.274 13.149 - 13.207: 98.1757% ( 2) 00:19:24.274 13.265 - 13.324: 98.1929% ( 2) 00:19:24.532 13.324 - 13.382: 98.2014% ( 1) 00:19:24.532 13.556 - 13.615: 98.2100% ( 1) 00:19:24.532 13.615 - 13.673: 98.2357% ( 3) 00:19:24.532 13.673 - 13.731: 98.2443% ( 1) 00:19:24.532 13.789 - 13.847: 98.2700% ( 3) 00:19:24.532 13.847 - 13.905: 98.2956% ( 3) 00:19:24.532 13.905 - 13.964: 98.3299% ( 4) 00:19:24.532 13.964 - 14.022: 98.3470% ( 2) 00:19:24.532 14.022 - 14.080: 98.4498% ( 12) 00:19:24.532 14.080 - 14.138: 98.5954% ( 17) 00:19:24.532 14.138 - 14.196: 98.6811% ( 10) 00:19:24.532 14.196 - 14.255: 98.7067% ( 3) 00:19:24.532 14.255 - 14.313: 98.7239% ( 2) 00:19:24.532 14.313 - 14.371: 98.7324% ( 1) 00:19:24.532 14.371 - 14.429: 98.7496% ( 2) 00:19:24.532 14.429 - 14.487: 98.7581% ( 1) 00:19:24.532 14.545 - 14.604: 98.7667% ( 1) 00:19:24.532 14.662 - 14.720: 98.7924% ( 3) 00:19:24.532 14.720 - 14.778: 98.8267% ( 4) 00:19:24.532 14.778 - 14.836: 98.8352% ( 1) 00:19:24.532 14.895 - 15.011: 98.8609% ( 3) 00:19:24.532 15.360 - 15.476: 98.8780% ( 2) 00:19:24.532 15.476 - 15.593: 98.8866% ( 1) 00:19:24.532 15.593 - 15.709: 98.8952% ( 1) 00:19:24.532 15.825 - 15.942: 98.9037% ( 1) 00:19:24.532 15.942 - 16.058: 98.9209% ( 2) 00:19:24.532 16.175 - 16.291: 98.9294% ( 1) 00:19:24.532 16.291 - 16.407: 98.9466% ( 2) 00:19:24.532 16.407 - 16.524: 98.9808% ( 4) 00:19:24.532 16.524 - 16.640: 98.9979% ( 2) 00:19:24.532 16.640 - 16.756: 99.0151% ( 2) 00:19:24.532 16.873 - 16.989: 99.0236% ( 1) 00:19:24.532 16.989 - 17.105: 99.0408% ( 2) 00:19:24.532 17.105 - 17.222: 99.0493% ( 1) 00:19:24.532 17.338 - 17.455: 99.0665% ( 2) 00:19:24.532 17.571 - 17.687: 99.0922% ( 3) 00:19:24.532 17.687 - 17.804: 99.1007% ( 1) 00:19:24.532 17.804 - 17.920: 99.1093% ( 1) 00:19:24.532 17.920 - 18.036: 99.1264% ( 2) 00:19:24.532 18.036 - 18.153: 99.1350% ( 1) 00:19:24.532 18.153 - 18.269: 99.1521% ( 2) 00:19:24.532 18.269 - 18.385: 99.1607% ( 1) 00:19:24.532 18.385 - 18.502: 99.1692% ( 1) 00:19:24.532 18.735 - 18.851: 99.1778% ( 1) 00:19:24.532 18.967 - 19.084: 99.1864% ( 1) 00:19:24.532 19.084 - 19.200: 99.2121% ( 3) 00:19:24.532 19.433 - 19.549: 99.2292% ( 2) 00:19:24.532 19.665 - 19.782: 99.2378% ( 1) 00:19:24.532 20.131 - 20.247: 99.2463% ( 1) 00:19:24.532 20.364 - 20.480: 99.2634% ( 2) 00:19:24.532 20.596 - 20.713: 99.2720% ( 1) 00:19:24.532 20.829 - 20.945: 99.2977% ( 3) 00:19:24.532 20.945 - 21.062: 99.3320% ( 4) 00:19:24.532 21.062 - 21.178: 99.3748% ( 5) 00:19:24.532 21.178 - 21.295: 99.4519% ( 9) 00:19:24.532 21.295 - 21.411: 99.5033% ( 6) 00:19:24.532 21.411 - 21.527: 99.5803% ( 9) 00:19:24.532 21.527 - 21.644: 99.6660% ( 10) 00:19:24.532 
21.644 - 21.760: 99.7516% ( 10) 00:19:24.532 21.760 - 21.876: 99.7773% ( 3) 00:19:24.532 21.876 - 21.993: 99.8373% ( 7) 00:19:24.532 21.993 - 22.109: 99.8458% ( 1) 00:19:24.532 22.109 - 22.225: 99.8630% ( 2) 00:19:24.532 22.342 - 22.458: 99.8715% ( 1) 00:19:24.532 22.575 - 22.691: 99.8887% ( 2) 00:19:24.532 22.807 - 22.924: 99.8972% ( 1) 00:19:24.532 22.924 - 23.040: 99.9315% ( 4) 00:19:24.532 23.156 - 23.273: 99.9400% ( 1) 00:19:24.532 23.273 - 23.389: 99.9486% ( 1) 00:19:24.532 23.389 - 23.505: 99.9572% ( 1) 00:19:24.532 24.436 - 24.553: 99.9657% ( 1) 00:19:24.532 29.673 - 29.789: 99.9743% ( 1) 00:19:24.532 31.884 - 32.116: 99.9829% ( 1) 00:19:24.532 39.796 - 40.029: 99.9914% ( 1) 00:19:24.532 117.760 - 118.226: 100.0000% ( 1) 00:19:24.532 00:19:24.532 00:19:24.532 real 0m1.621s 00:19:24.532 user 0m1.029s 00:19:24.532 sys 0m0.592s 00:19:24.532 17:38:20 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:24.532 17:38:20 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:19:24.532 ************************************ 00:19:24.532 END TEST nvme_overhead 00:19:24.532 ************************************ 00:19:24.532 17:38:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:24.532 17:38:20 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:24.532 17:38:20 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:19:24.533 17:38:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:24.533 17:38:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:24.533 ************************************ 00:19:24.533 START TEST nvme_arbitration 00:19:24.533 ************************************ 00:19:24.533 17:38:20 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:25.098 EAL: TSC is not safe to use in SMP mode 00:19:25.098 EAL: TSC is not invariant 00:19:25.098 [2024-07-15 17:38:20.690113] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:29.281 Initializing NVMe Controllers 00:19:29.281 Attaching to 0000:00:10.0 00:19:29.281 Attached to 0000:00:10.0 00:19:29.281 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:19:29.281 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:19:29.281 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:19:29.281 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:19:29.281 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:29.281 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:19:29.281 Initialization complete. Launching workers. 
00:19:29.281 Starting thread on core 1 with urgent priority queue 00:19:29.281 Starting thread on core 2 with urgent priority queue 00:19:29.281 Starting thread on core 3 with urgent priority queue 00:19:29.282 Starting thread on core 0 with urgent priority queue 00:19:29.282 QEMU NVMe Ctrl (12340 ) core 0: 6234.33 IO/s 16.04 secs/100000 ios 00:19:29.282 QEMU NVMe Ctrl (12340 ) core 1: 6221.00 IO/s 16.07 secs/100000 ios 00:19:29.282 QEMU NVMe Ctrl (12340 ) core 2: 6281.33 IO/s 15.92 secs/100000 ios 00:19:29.282 QEMU NVMe Ctrl (12340 ) core 3: 6198.67 IO/s 16.13 secs/100000 ios 00:19:29.282 ======================================================== 00:19:29.282 00:19:29.282 00:19:29.282 real 0m4.240s 00:19:29.282 user 0m12.690s 00:19:29.282 sys 0m0.563s 00:19:29.282 17:38:24 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:29.282 17:38:24 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:29.282 ************************************ 00:19:29.282 END TEST nvme_arbitration 00:19:29.282 ************************************ 00:19:29.282 17:38:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:29.282 17:38:24 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:29.282 17:38:24 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:29.282 17:38:24 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.282 17:38:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:29.282 ************************************ 00:19:29.282 START TEST nvme_single_aen 00:19:29.282 ************************************ 00:19:29.282 17:38:24 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:29.282 EAL: TSC is not safe to use in SMP mode 00:19:29.282 EAL: TSC is not invariant 00:19:29.282 [2024-07-15 17:38:24.964744] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:29.282 Asynchronous Event Request test 00:19:29.282 Attaching to 0000:00:10.0 00:19:29.282 Attached to 0000:00:10.0 00:19:29.282 Reset controller to setup AER completions for this process 00:19:29.282 Registering asynchronous event callbacks... 00:19:29.282 Getting orig temperature thresholds of all controllers 00:19:29.282 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:29.282 Setting all controllers temperature threshold low to trigger AER 00:19:29.282 Waiting for all controllers temperature threshold to be set lower 00:19:29.282 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:29.282 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:29.282 Waiting for all controllers to trigger AER and reset threshold 00:19:29.282 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:29.282 Cleaning up... 
00:19:29.282 00:19:29.282 real 0m0.577s 00:19:29.282 user 0m0.008s 00:19:29.282 sys 0m0.568s 00:19:29.282 17:38:25 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:29.282 ************************************ 00:19:29.282 END TEST nvme_single_aen 00:19:29.282 ************************************ 00:19:29.282 17:38:25 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:19:29.282 17:38:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:29.282 17:38:25 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:19:29.282 17:38:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:29.282 17:38:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.282 17:38:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:29.282 ************************************ 00:19:29.282 START TEST nvme_doorbell_aers 00:19:29.282 ************************************ 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:29.282 17:38:25 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:29.850 EAL: TSC is not safe to use in SMP mode 00:19:29.850 EAL: TSC is not invariant 00:19:29.850 [2024-07-15 17:38:25.636180] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:30.109 Executing: test_write_invalid_db 00:19:30.109 Waiting for AER completion... 00:19:30.109 Asynchronous Event received. 00:19:30.109 Error Informaton Log Page received. 00:19:30.109 Success: test_write_invalid_db 00:19:30.109 00:19:30.109 Executing: test_invalid_db_write_overflow_sq 00:19:30.109 Waiting for AER completion... 00:19:30.109 Asynchronous Event received. 00:19:30.109 Error Informaton Log Page received. 00:19:30.109 Success: test_invalid_db_write_overflow_sq 00:19:30.109 00:19:30.109 Executing: test_invalid_db_write_overflow_cq 00:19:30.109 Waiting for AER completion... 00:19:30.109 Asynchronous Event received. 00:19:30.109 Error Informaton Log Page received. 
00:19:30.109 Success: test_invalid_db_write_overflow_cq 00:19:30.109 00:19:30.109 00:19:30.109 real 0m0.632s 00:19:30.109 user 0m0.045s 00:19:30.109 sys 0m0.599s 00:19:30.109 ************************************ 00:19:30.109 END TEST nvme_doorbell_aers 00:19:30.109 ************************************ 00:19:30.109 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:30.109 17:38:25 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:19:30.109 17:38:25 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:30.109 17:38:25 nvme -- nvme/nvme.sh@97 -- # uname 00:19:30.109 17:38:25 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:19:30.109 17:38:25 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:19:30.109 17:38:25 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:30.109 17:38:25 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:30.109 17:38:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:30.109 ************************************ 00:19:30.109 START TEST bdev_nvme_reset_stuck_adm_cmd 00:19:30.109 ************************************ 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:19:30.109 * Looking for test storage... 00:19:30.109 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=69014 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 69014 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 69014 ']' 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:30.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:30.109 17:38:25 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:30.109 [2024-07-15 17:38:25.910449] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:19:30.109 [2024-07-15 17:38:25.910627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:30.676 EAL: TSC is not safe to use in SMP mode 00:19:30.676 EAL: TSC is not invariant 00:19:30.676 [2024-07-15 17:38:26.465111] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:30.935 [2024-07-15 17:38:26.548122] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:30.935 [2024-07-15 17:38:26.548188] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:30.935 [2024-07-15 17:38:26.548197] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:19:30.935 [2024-07-15 17:38:26.548205] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 
00:19:30.935 [2024-07-15 17:38:26.552090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.935 [2024-07-15 17:38:26.552299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:30.935 [2024-07-15 17:38:26.552167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:30.935 [2024-07-15 17:38:26.552293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.195 17:38:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:31.195 17:38:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:19:31.195 17:38:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:19:31.195 17:38:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.195 17:38:26 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:31.195 [2024-07-15 17:38:26.952426] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:31.195 nvme0n1 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:31.195 true 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1721065107 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=69026 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:31.195 17:38:27 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:33.731 [2024-07-15 17:38:29.036521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:33.731 [2024-07-15 17:38:29.036689] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:33.731 [2024-07-15 17:38:29.036708] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:33.731 [2024-07-15 17:38:29.036718] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:33.731 [2024-07-15 17:38:29.038076] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.731 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 69026 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 69026 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 69026 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.lUVHMr 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.rBJas9 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 69014 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 69014 ']' 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 69014 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps -c -o command 69014 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # tail -1 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:19:33.731 killing process with pid 69014 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69014' 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 69014 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 69014 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:19:33.731 00:19:33.731 real 0m3.652s 00:19:33.731 user 0m11.918s 00:19:33.731 sys 0m0.799s 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:33.731 ************************************ 00:19:33.731 END TEST bdev_nvme_reset_stuck_adm_cmd 00:19:33.731 ************************************ 00:19:33.731 17:38:29 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:33.731 17:38:29 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:33.731 17:38:29 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:19:33.731 17:38:29 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:19:33.731 17:38:29 nvme -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:33.731 17:38:29 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:33.731 17:38:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:33.731 ************************************ 00:19:33.731 START TEST nvme_fio 00:19:33.731 ************************************ 00:19:33.731 17:38:29 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:19:33.731 17:38:29 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:33.731 17:38:29 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:19:33.731 17:38:29 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:19:33.731 17:38:29 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:33.731 17:38:29 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:19:33.731 17:38:29 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:33.731 17:38:29 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:33.731 17:38:29 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:33.731 17:38:29 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:33.731 17:38:29 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:33.731 17:38:29 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:19:33.731 17:38:29 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:19:33.731 17:38:29 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:19:33.731 17:38:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:33.731 17:38:29 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:19:34.300 EAL: TSC is not safe to use in SMP mode 00:19:34.300 EAL: TSC is not invariant 00:19:34.300 [2024-07-15 17:38:30.064112] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:34.300 17:38:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:34.300 17:38:30 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:19:34.868 EAL: TSC is not safe to use in SMP mode 00:19:34.868 EAL: TSC is not invariant 00:19:34.868 [2024-07-15 17:38:30.681505] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:35.127 17:38:30 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:19:35.127 17:38:30 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:35.127 17:38:30 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:35.127 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:35.127 fio-3.35 00:19:35.127 Starting 1 thread 00:19:35.693 EAL: TSC is not safe to use in SMP mode 00:19:35.693 EAL: TSC is not invariant 00:19:35.693 [2024-07-15 17:38:31.418044] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:38.218 00:19:38.218 test: (groupid=0, jobs=1): err= 0: pid=101537: Mon Jul 15 17:38:33 2024 00:19:38.218 read: IOPS=46.1k, BW=180MiB/s (189MB/s)(360MiB/2001msec) 00:19:38.218 slat (nsec): min=391, max=51548, avg=602.93, stdev=924.81 00:19:38.218 clat (usec): min=305, max=5245, avg=1389.85, stdev=241.00 00:19:38.218 lat (usec): min=305, max=5247, avg=1390.45, stdev=241.06 00:19:38.218 clat percentiles (usec): 00:19:38.218 | 1.00th=[ 865], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1221], 00:19:38.218 | 30.00th=[ 1270], 40.00th=[ 1319], 50.00th=[ 1369], 60.00th=[ 1418], 00:19:38.218 | 70.00th=[ 1467], 80.00th=[ 1532], 90.00th=[ 1614], 95.00th=[ 1729], 00:19:38.218 | 99.00th=[ 2245], 99.50th=[ 2507], 99.90th=[ 3294], 99.95th=[ 3490], 00:19:38.218 | 99.99th=[ 4015] 00:19:38.218 bw ( KiB/s): min=167152, max=187280, per=98.00%, avg=180568.00, stdev=11618.60, samples=3 00:19:38.218 iops : min=41788, max=46820, avg=45142.00, stdev=2904.65, samples=3 00:19:38.218 write: IOPS=45.9k, BW=179MiB/s (188MB/s)(359MiB/2001msec); 0 zone resets 00:19:38.218 slat (nsec): min=409, max=30103, avg=804.75, stdev=1250.20 00:19:38.218 clat (usec): min=295, max=5102, avg=1389.68, stdev=244.52 00:19:38.218 lat (usec): min=310, max=5103, avg=1390.49, stdev=244.59 00:19:38.218 clat percentiles (usec): 00:19:38.218 | 1.00th=[ 807], 5.00th=[ 1123], 10.00th=[ 1156], 20.00th=[ 1221], 00:19:38.218 | 
30.00th=[ 1270], 40.00th=[ 1319], 50.00th=[ 1369], 60.00th=[ 1418], 00:19:38.218 | 70.00th=[ 1467], 80.00th=[ 1532], 90.00th=[ 1614], 95.00th=[ 1729], 00:19:38.218 | 99.00th=[ 2278], 99.50th=[ 2507], 99.90th=[ 3326], 99.95th=[ 3621], 00:19:38.218 | 99.99th=[ 4686] 00:19:38.218 bw ( KiB/s): min=167208, max=186888, per=97.83%, avg=179661.33, stdev=10831.16, samples=3 00:19:38.218 iops : min=41802, max=46720, avg=44915.33, stdev=2707.61, samples=3 00:19:38.218 lat (usec) : 500=0.48%, 750=0.41%, 1000=0.54% 00:19:38.218 lat (msec) : 2=96.71%, 4=1.84%, 10=0.02% 00:19:38.218 cpu : usr=100.00%, sys=0.00%, ctx=23, majf=0, minf=2 00:19:38.218 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:38.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:38.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:38.218 issued rwts: total=92173,91870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:38.218 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:38.218 00:19:38.218 Run status group 0 (all jobs): 00:19:38.218 READ: bw=180MiB/s (189MB/s), 180MiB/s-180MiB/s (189MB/s-189MB/s), io=360MiB (378MB), run=2001-2001msec 00:19:38.218 WRITE: bw=179MiB/s (188MB/s), 179MiB/s-179MiB/s (188MB/s-188MB/s), io=359MiB (376MB), run=2001-2001msec 00:19:39.153 17:38:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:19:39.153 17:38:34 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:19:39.153 ************************************ 00:19:39.153 END TEST nvme_fio 00:19:39.153 ************************************ 00:19:39.153 00:19:39.153 real 0m5.290s 00:19:39.153 user 0m2.344s 00:19:39.153 sys 0m2.862s 00:19:39.153 17:38:34 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.153 17:38:34 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:19:39.153 17:38:34 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:39.153 00:19:39.153 real 0m25.993s 00:19:39.153 user 0m30.924s 00:19:39.153 sys 0m12.861s 00:19:39.153 17:38:34 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.153 17:38:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:39.153 ************************************ 00:19:39.153 END TEST nvme 00:19:39.153 ************************************ 00:19:39.153 17:38:34 -- common/autotest_common.sh@1142 -- # return 0 00:19:39.153 17:38:34 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:19:39.153 17:38:34 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:39.153 17:38:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:39.153 17:38:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.153 17:38:34 -- common/autotest_common.sh@10 -- # set +x 00:19:39.153 ************************************ 00:19:39.153 START TEST nvme_scc 00:19:39.153 ************************************ 00:19:39.153 17:38:34 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:39.153 * Looking for test storage... 
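The nvme_fio run summarized above does not go through a kernel block device at all: fio loads SPDK's NVMe ioengine plugin via LD_PRELOAD and addresses the controller directly by transport address, which is why the filename encodes trtype and traddr (with the colons of the PCIe address replaced by dots so fio does not split the option). A minimal sketch of the invocation, using the paths from this run:

    # example_config.fio selects ioengine=spdk / iodepth=128; bs is overridden to 4 KiB here.
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

With that setup the 2-second randrw job above sustains roughly 180 MiB/s in each direction at a mean completion latency of about 1.39 ms.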
00:19:39.153 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:39.153 17:38:34 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:39.153 17:38:34 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:39.153 17:38:34 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:19:39.153 17:38:34 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:39.153 17:38:34 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:39.153 17:38:34 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:39.153 17:38:34 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:39.153 17:38:34 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:39.153 17:38:34 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:39.153 17:38:34 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:39.153 17:38:34 nvme_scc -- paths/export.sh@4 -- # export PATH 00:19:39.154 17:38:34 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:19:39.154 17:38:34 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:19:39.154 17:38:34 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.154 17:38:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:19:39.154 17:38:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:19:39.154 17:38:34 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:19:39.154 00:19:39.154 real 0m0.167s 00:19:39.154 user 0m0.116s 00:19:39.154 sys 0m0.128s 00:19:39.154 17:38:34 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:39.154 17:38:34 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:19:39.154 ************************************ 00:19:39.154 END TEST nvme_scc 00:19:39.154 ************************************ 00:19:39.413 17:38:34 -- common/autotest_common.sh@1142 -- # return 0 00:19:39.413 17:38:34 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:19:39.413 17:38:34 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:19:39.413 17:38:34 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:19:39.413 17:38:34 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:19:39.413 17:38:34 -- 
spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:19:39.413 17:38:34 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:39.413 17:38:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:39.413 17:38:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:39.413 17:38:34 -- common/autotest_common.sh@10 -- # set +x 00:19:39.413 ************************************ 00:19:39.413 START TEST nvme_rpc 00:19:39.413 ************************************ 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:39.413 * Looking for test storage... 00:19:39.413 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:39.413 17:38:35 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:39.413 17:38:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:19:39.413 17:38:35 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:19:39.413 17:38:35 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=69268 00:19:39.413 17:38:35 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:39.413 17:38:35 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:19:39.413 17:38:35 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 69268 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 69268 ']' 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:39.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:39.413 17:38:35 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:39.413 [2024-07-15 17:38:35.178009] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 
00:19:39.413 [2024-07-15 17:38:35.178245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:39.980 EAL: TSC is not safe to use in SMP mode 00:19:39.980 EAL: TSC is not invariant 00:19:39.980 [2024-07-15 17:38:35.747545] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:40.239 [2024-07-15 17:38:35.838536] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:40.239 [2024-07-15 17:38:35.838606] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:40.239 [2024-07-15 17:38:35.841388] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.239 [2024-07-15 17:38:35.841380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:40.497 17:38:36 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:40.497 17:38:36 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:19:40.497 17:38:36 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:19:40.755 [2024-07-15 17:38:36.519504] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:40.755 Nvme0n1 00:19:41.013 17:38:36 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:19:41.013 17:38:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:19:41.271 request: 00:19:41.271 { 00:19:41.271 "bdev_name": "Nvme0n1", 00:19:41.271 "filename": "non_existing_file", 00:19:41.271 "method": "bdev_nvme_apply_firmware", 00:19:41.271 "req_id": 1 00:19:41.271 } 00:19:41.271 Got JSON-RPC error response 00:19:41.271 response: 00:19:41.271 { 00:19:41.271 "code": -32603, 00:19:41.271 "message": "open file failed." 
00:19:41.271 } 00:19:41.271 17:38:36 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:19:41.271 17:38:36 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:19:41.271 17:38:36 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:41.529 17:38:37 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:41.529 17:38:37 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 69268 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 69268 ']' 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 69268 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 69268 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@956 -- # tail -1 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:19:41.529 killing process with pid 69268 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69268' 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@967 -- # kill 69268 00:19:41.529 17:38:37 nvme_rpc -- common/autotest_common.sh@972 -- # wait 69268 00:19:41.787 00:19:41.787 real 0m2.440s 00:19:41.787 user 0m4.519s 00:19:41.787 sys 0m0.863s 00:19:41.787 17:38:37 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:41.787 17:38:37 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:41.787 ************************************ 00:19:41.787 END TEST nvme_rpc 00:19:41.787 ************************************ 00:19:41.787 17:38:37 -- common/autotest_common.sh@1142 -- # return 0 00:19:41.787 17:38:37 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:41.787 17:38:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:41.787 17:38:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.787 17:38:37 -- common/autotest_common.sh@10 -- # set +x 00:19:41.787 ************************************ 00:19:41.787 START TEST nvme_rpc_timeouts 00:19:41.787 ************************************ 00:19:41.787 17:38:37 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:41.787 * Looking for test storage... 
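The nvme_rpc exchange above exercises the error path of bdev_nvme_apply_firmware: the file argument does not exist, so the target answers with JSON-RPC error -32603 ("open file failed.") and the test simply confirms the call failed (rv=1) before detaching the controller and stopping the target. A minimal way to reproduce the same call by hand, assuming a running spdk_tgt with Nvme0 attached as in the log:

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    # Expected to fail: the firmware image path does not exist.
    scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 \
        || echo 'apply_firmware failed as expected'
    scripts/rpc.py bdev_nvme_detach_controller Nvme0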
00:19:41.787 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:41.787 17:38:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:41.787 17:38:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_69305 00:19:41.787 17:38:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_69305 00:19:41.787 17:38:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=69333 00:19:41.787 17:38:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:19:41.787 17:38:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:41.787 17:38:37 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 69333 00:19:41.787 17:38:37 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 69333 ']' 00:19:41.787 17:38:37 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.787 17:38:37 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:41.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.787 17:38:37 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.788 17:38:37 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:41.788 17:38:37 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:42.046 [2024-07-15 17:38:37.625782] Starting SPDK v24.09-pre git sha1 455fda465 / DPDK 24.03.0 initialization... 00:19:42.046 [2024-07-15 17:38:37.626077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:42.643 EAL: TSC is not safe to use in SMP mode 00:19:42.643 EAL: TSC is not invariant 00:19:42.643 [2024-07-15 17:38:38.174576] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:42.643 [2024-07-15 17:38:38.258694] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:42.643 [2024-07-15 17:38:38.258779] app.c: 928:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
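With the target from this trace up (spdk_tgt -m 0x3, two reactors), the timeout test below works purely by diffing saved configurations: dump the defaults with save_config, change the NVMe timeouts, dump again, then compare the two JSON snapshots setting by setting. A condensed sketch of those three RPC steps, with $$-based file names standing in for the PID-suffixed temp files the script actually uses:

    scripts/rpc.py save_config > /tmp/settings_default_$$   # defaults: action_on_timeout=none, timeouts 0
    scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    scripts/rpc.py save_config > /tmp/settings_modified_$$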
00:19:42.643 [2024-07-15 17:38:38.261612] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.643 [2024-07-15 17:38:38.261602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.209 17:38:38 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:43.209 Checking default timeout settings: 00:19:43.209 17:38:38 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:19:43.209 17:38:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:19:43.209 17:38:38 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:43.467 Making settings changes with rpc: 00:19:43.467 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:19:43.467 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:19:43.724 Check default vs. modified settings: 00:19:43.724 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:19:43.724 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_69305 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_69305 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:19:43.983 Setting action_on_timeout is changed as expected. 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_69305 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_69305 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:19:43.983 Setting timeout_us is changed as expected. 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_69305 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:43.983 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_69305 00:19:43.984 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:43.984 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:43.984 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:19:43.984 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:19:43.984 Setting timeout_admin_us is changed as expected. 00:19:43.984 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
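Each of the three checks above follows the same pattern: pull the option out of both saved snapshots with grep, take the value column with awk, strip punctuation with sed, and require that the value actually changed. A compact equivalent of that loop, reusing the snapshot files sketched earlier (check_setting is a hypothetical helper name; the real script inlines this per setting):

    check_setting() {
        local name=$1 before after
        before=$(grep "$name" /tmp/settings_default_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$name" /tmp/settings_modified_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        if [ "$before" != "$after" ]; then
            echo "Setting $name is changed as expected."
        else
            echo "Setting $name did not change" >&2
            return 1
        fi
    }

    for s in action_on_timeout timeout_us timeout_admin_us; do
        check_setting "$s"
    done

In this run the three values move from none/0/0 to abort/12000000/24000000, so all three checks pass and the log prints "RPC TIMEOUT SETTING TEST PASSED." a few lines further down.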
00:19:43.984 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:19:43.984 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_69305 /tmp/settings_modified_69305 00:19:43.984 17:38:39 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 69333 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 69333 ']' 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 69333 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps -c -o command 69333 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # tail -1 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:19:43.984 killing process with pid 69333 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69333' 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 69333 00:19:43.984 17:38:39 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 69333 00:19:44.243 RPC TIMEOUT SETTING TEST PASSED. 00:19:44.243 17:38:40 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:19:44.243 00:19:44.243 real 0m2.588s 00:19:44.243 user 0m4.989s 00:19:44.243 sys 0m0.842s 00:19:44.510 17:38:40 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:44.510 17:38:40 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:44.510 ************************************ 00:19:44.510 END TEST nvme_rpc_timeouts 00:19:44.510 ************************************ 00:19:44.510 17:38:40 -- common/autotest_common.sh@1142 -- # return 0 00:19:44.510 17:38:40 -- spdk/autotest.sh@243 -- # uname -s 00:19:44.510 17:38:40 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:19:44.510 17:38:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:44.510 17:38:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:44.510 17:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:44.510 17:38:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:19:44.510 17:38:40 -- spdk/autotest.sh@363 -- # [[ 0 -eq 
1 ]] 00:19:44.510 17:38:40 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:19:44.510 17:38:40 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:19:44.510 17:38:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:19:44.510 17:38:40 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:19:44.510 17:38:40 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:19:44.510 17:38:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:44.510 17:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:44.510 17:38:40 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:19:44.510 17:38:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:19:44.510 17:38:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:44.510 17:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:45.077 setup.sh cleanup function not yet supported on FreeBSD 00:19:45.077 17:38:40 -- common/autotest_common.sh@1451 -- # return 0 00:19:45.077 17:38:40 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:19:45.077 17:38:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.077 17:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:45.077 17:38:40 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:19:45.077 17:38:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:45.077 17:38:40 -- common/autotest_common.sh@10 -- # set +x 00:19:45.336 17:38:40 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:45.336 17:38:40 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:45.336 17:38:40 -- spdk/autotest.sh@391 -- # hash lcov 00:19:45.336 /home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found 00:19:45.336 17:38:41 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:45.336 17:38:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:45.336 17:38:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:45.336 17:38:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:45.336 17:38:41 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:45.336 17:38:41 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:45.336 17:38:41 -- paths/export.sh@4 -- $ export PATH 00:19:45.336 17:38:41 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:45.336 17:38:41 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:45.336 17:38:41 -- common/autobuild_common.sh@444 -- $ date +%s 00:19:45.336 17:38:41 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721065121.XXXXXX 00:19:45.336 17:38:41 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721065121.XXXXXX.h5eq98wSPZ 00:19:45.336 17:38:41 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:19:45.336 17:38:41 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:19:45.336 17:38:41 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:19:45.336 17:38:41 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' 
--exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:45.336 17:38:41 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:45.336 17:38:41 -- common/autobuild_common.sh@460 -- $ get_config_params 00:19:45.336 17:38:41 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:19:45.336 17:38:41 -- common/autotest_common.sh@10 -- $ set +x 00:19:45.336 17:38:41 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:19:45.336 17:38:41 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:19:45.336 17:38:41 -- pm/common@17 -- $ local monitor 00:19:45.336 17:38:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:45.336 17:38:41 -- pm/common@25 -- $ sleep 1 00:19:45.336 17:38:41 -- pm/common@21 -- $ date +%s 00:19:45.336 17:38:41 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1721065121 00:19:45.595 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1721065121_collect-vmstat.pm.log 00:19:46.530 17:38:42 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:19:46.530 17:38:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:19:46.530 17:38:42 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:46.530 17:38:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:19:46.530 17:38:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:19:46.530 17:38:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:19:46.530 17:38:42 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:19:46.530 17:38:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:19:46.530 17:38:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:19:46.530 17:38:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:19:46.530 17:38:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:19:46.530 17:38:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:19:46.530 17:38:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:46.530 17:38:42 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:19:46.530 17:38:42 -- pm/common@44 -- $ pid=69556 00:19:46.530 17:38:42 -- pm/common@50 -- $ kill -TERM 69556 00:19:46.530 + [[ -n 1231 ]] 00:19:46.530 + sudo kill 1231 00:19:46.537 [Pipeline] } 00:19:46.551 [Pipeline] // timeout 00:19:46.555 [Pipeline] } 00:19:46.567 [Pipeline] // stage 00:19:46.571 [Pipeline] } 00:19:46.582 [Pipeline] // catchError 00:19:46.590 [Pipeline] stage 00:19:46.591 [Pipeline] { (Stop VM) 00:19:46.604 [Pipeline] sh 00:19:46.884 + vagrant halt 00:19:50.173 ==> default: Halting domain... 00:20:12.152 [Pipeline] sh 00:20:12.428 + vagrant destroy -f 00:20:16.620 ==> default: Removing domain... 
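From here on the pipeline only tears down and collects: the vmstat resource monitor is stopped (kill -TERM on its recorded pid), the leftover test process group is killed, the Vagrant VM is halted and destroyed, and the output directory is then moved into the Jenkins workspace and archived. Condensed, the VM/artifact part of that teardown is just:

    vagrant halt         # "Halting domain..." above
    vagrant destroy -f   # "Removing domain..."
    mv output /var/jenkins/workspace/freebsd-vg-autotest/output   # staged for archiveArtifacts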
00:20:16.632 [Pipeline] sh 00:20:16.914 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output 00:20:16.924 [Pipeline] } 00:20:16.943 [Pipeline] // stage 00:20:16.949 [Pipeline] } 00:20:16.967 [Pipeline] // dir 00:20:16.973 [Pipeline] } 00:20:16.991 [Pipeline] // wrap 00:20:16.998 [Pipeline] } 00:20:17.015 [Pipeline] // catchError 00:20:17.025 [Pipeline] stage 00:20:17.028 [Pipeline] { (Epilogue) 00:20:17.044 [Pipeline] sh 00:20:17.325 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:20:17.338 [Pipeline] catchError 00:20:17.339 [Pipeline] { 00:20:17.356 [Pipeline] sh 00:20:17.710 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:20:17.710 Artifacts sizes are good 00:20:17.717 [Pipeline] } 00:20:17.732 [Pipeline] // catchError 00:20:17.742 [Pipeline] archiveArtifacts 00:20:17.748 Archiving artifacts 00:20:17.785 [Pipeline] cleanWs 00:20:17.815 [WS-CLEANUP] Deleting project workspace... 00:20:17.815 [WS-CLEANUP] Deferred wipeout is used... 00:20:17.821 [WS-CLEANUP] done 00:20:17.823 [Pipeline] } 00:20:17.840 [Pipeline] // stage 00:20:17.844 [Pipeline] } 00:20:17.862 [Pipeline] // node 00:20:17.867 [Pipeline] End of Pipeline 00:20:17.899 Finished: SUCCESS